ABSTRACT
The two-parameter gamma distribution is one of the most commonly used distributions for analyzing environmental, meteorological, medical, and survival data. It has a two-dimensional minimal sufficient statistic, and the two parameters can be taken to be the mean and the shape parameter. This makes it closely comparable to the normal model, but it differs substantially in that the exact distribution of the minimal sufficient statistic is not available. A Bartlett-type correction of the log-likelihood ratio statistic is proposed for the one-sample gamma mean problem and extended to testing for homogeneity of k≥2 independent gamma means. Since the exact correction factor does not, in general, exist in closed form, a simulation algorithm is proposed to obtain it numerically. Real-life examples and simulation studies illustrate the application and the accuracy of the proposed method.
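As a concrete illustration of the simulation idea, the minimal Python sketch below estimates a Bartlett-type correction factor for the one-sample gamma mean problem as the simulated null mean of the log-likelihood ratio statistic W divided by its degrees of freedom, using a parametric bootstrap at the constrained MLE. This is one standard way to obtain such a factor numerically, not necessarily the paper's exact algorithm; all function names and the choice of B are illustrative.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def gamma_loglik(x, mu, alpha):
    # Gamma log-likelihood in the (mean mu, shape alpha) parametrization,
    # i.e. scale = mu / alpha so that E[X] = mu.
    return stats.gamma.logpdf(x, a=alpha, scale=mu / alpha).sum()

def gamma_mle(x, mu=None):
    # Maximize the log-likelihood; if mu is supplied, maximize over the
    # shape only (for fixed shape, the MLE of the mean is the sample mean).
    if mu is None:
        mu = x.mean()
    res = minimize_scalar(lambda a: -gamma_loglik(x, mu, a),
                          bounds=(1e-3, 1e3), method="bounded")
    return mu, res.x, -res.fun

def lr_stat(x, mu0):
    # Log-likelihood ratio statistic W for H0: mean = mu0.
    _, _, l1 = gamma_mle(x)            # unconstrained fit
    _, _, l0 = gamma_mle(x, mu=mu0)    # fit under H0
    return 2.0 * (l1 - l0)

def bartlett_corrected_test(x, mu0, B=5000, seed=0):
    # Estimate the correction factor c = E[W | H0] / df by parametric
    # bootstrap at the constrained MLE, then refer W / c to chi-square(1).
    rng = np.random.default_rng(seed)
    w_obs = lr_stat(x, mu0)
    _, alpha0, _ = gamma_mle(x, mu=mu0)
    sims = np.empty(B)
    for b in range(B):
        xb = rng.gamma(shape=alpha0, scale=mu0 / alpha0, size=x.size)
        sims[b] = lr_stat(xb, mu0)
    c = sims.mean()                    # df = 1 in the one-sample problem
    return w_obs / c, stats.chi2.sf(w_obs / c, df=1)
```

The k-sample homogeneity test would follow the same pattern, with df = k - 1 and bootstrap samples drawn at the common-mean constrained MLE.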
ABSTRACT
Complex data pose unique challenges for data processing [...].
ABSTRACT
This paper reviews the potential use of fuzzy c-means clustering (FCM) and explores modifications to the distance function and centroid initialization methods to enhance image segmentation. The application of interest is the segmentation of breast tumours in mammograms. Breast cancer is the second leading cause of cancer deaths in Canadian women, and early detection reduces treatment costs and offers a favourable prognosis for patients. Classical methods, like mammography, rely on radiologists to detect cancerous tumours, which introduces the potential for human error in cancer detection; they are also labour-intensive and hence expensive in terms of healthcare resources. Recent research therefore supplements classical methods with automated mammogram analysis. The basic FCM method relies on the Euclidean distance, which is not optimal for measuring non-spherical structures. To address this limitation, we review the implementation of a Mahalanobis-distance-based FCM (FCM-M). The three objectives of the paper are to: (1) review FCM, FCM-M, and three centroid initialization algorithms from the literature; (2) illustrate the effectiveness of these algorithms in image segmentation; and (3) develop a Python package implementing the optimized algorithms and release it on GitHub. Image analysis shows that using one of the three centroid initialization algorithms enhances the performance of FCM, and that FCM-M produces higher clustering accuracy and outlines the tumour structure better than basic FCM.
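To make the FCM-M idea concrete, the following minimal sketch implements fuzzy c-means with a cluster-specific Mahalanobis distance, estimating each cluster's covariance with fuzzy membership weights (in the spirit of Gustafson-Kessel-type variants). It is an illustrative implementation under those assumptions, not the paper's package; the paper's FCM-M and centroid initialization schemes may differ (the sketch uses simple random initialization).

```python
import numpy as np

def fcm_mahalanobis(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    # X: (n, p) data; c: number of clusters; m: fuzzifier (> 1).
    # Cluster k uses d^2(x, v_k) = (x - v_k)^T S_k^{-1} (x - v_k),
    # where S_k is a fuzzy-weighted covariance estimate.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    V = X[rng.choice(n, c, replace=False)]       # random centroid init
    S_inv = np.stack([np.eye(p)] * c)            # start from Euclidean
    for _ in range(max_iter):
        # squared Mahalanobis distances, shape (c, n)
        diff = X[None, :, :] - V[:, None, :]     # (c, n, p)
        d2 = np.einsum("cnp,cpq,cnq->cn", diff, S_inv, diff)
        d2 = np.maximum(d2, 1e-12)
        # standard FCM membership update (columns of U sum to 1)
        ratio = (d2[:, None, :] / d2[None, :, :]) ** (1.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=1)              # (c, n)
        Um = U ** m
        V_new = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # fuzzy covariance per cluster, inverted for the next distance
        diff = X[None, :, :] - V_new[:, None, :]
        for k in range(c):
            Sk = (Um[k, :, None] * diff[k]).T @ diff[k] / Um[k].sum()
            S_inv[k] = np.linalg.inv(Sk + 1e-6 * np.eye(p))
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return V, U
```

For image segmentation, X would typically hold per-pixel features, with the resulting memberships thresholded to outline a tumour region. A full Gustafson-Kessel variant additionally normalizes each covariance to a fixed determinant to keep clusters from degenerating; the sketch omits that step for brevity.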
ABSTRACT
The Sharpe ratio is a widely used risk-adjusted performance measure in economics and finance. Most known statistical inference methods for the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic asymptotically follows a chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to the method of Jobson and Korkie (1981) and outperforms the empirical likelihood method when the data come from a symmetric distribution; when the data come from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to illustrate the application of the proposed method.
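As an illustration of the mechanics, the sketch below computes an adjusted empirical likelihood ratio statistic for H0: Sharpe ratio = δ0, using mean-and-variance estimating equations, profiling out the scale nuisance parameter numerically, and appending the adjustment pseudo-observation g_{n+1} = -a_n ḡ with a_n = max(1, log(n)/2) in the style of Chen, Variyath, and Abraham (2008). The risk-free rate is taken as zero, and all function names are illustrative; the article's exact formulation may differ.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize, minimize_scalar

def ael_logratio(G):
    # -2 log adjusted-EL ratio for estimating functions G (n x q) at a fixed
    # parameter value.  The pseudo-observation -a_n * gbar keeps the zero
    # vector inside the convex hull of the augmented g_i's.
    n, q = G.shape
    a_n = max(1.0, np.log(n) / 2.0)
    Ga = np.vstack([G, -a_n * G.mean(axis=0)])

    def neg_dual(lam):
        t = 1.0 + Ga @ lam
        if np.any(t <= 1e-8):
            return 1e10                     # outside the feasible region
        return -np.log(t).sum()

    res = minimize(neg_dual, np.zeros(q), method="Nelder-Mead")
    return -2.0 * res.fun                   # = 2 * max_lam sum log(1 + lam'g_i)

def sharpe_ael_stat(x, delta0):
    # H0: mu / sigma = delta0 (risk-free rate taken as zero).  The nuisance
    # scale sigma is profiled out by minimizing the AEL statistic over sigma;
    # estimating equations: E[X - delta0*sigma] = 0 and
    # E[(X - delta0*sigma)^2 - sigma^2] = 0.
    def stat_at(sigma):
        mu = delta0 * sigma
        G = np.column_stack([x - mu, (x - mu) ** 2 - sigma ** 2])
        return ael_logratio(G)
    res = minimize_scalar(stat_at, bounds=(1e-3 * x.std(), 10.0 * x.std()),
                          method="bounded")
    return res.fun                          # referred to chi-square(1)

# Hypothetical usage: p-value for H0: Sharpe ratio = 0.5
# x = np.asarray(returns)
# w = sharpe_ael_stat(x, 0.5)
# p = stats.chi2.sf(w, df=1)
```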
ABSTRACT
The linear model often serves as a starting point for applying statistics in psychology. Formal training beyond the linear model is often limited, creating a potential pedagogical gap given the pervasiveness of non-normal data. We reviewed 61 recently published undergraduate and graduate textbooks on introductory statistics and the linear model, focusing on their treatment of non-normality. This review identified at least eight distinct methods suggested to address non-normality, which we organize into a new taxonomy according to whether the approach: (a) remains within the linear model, (b) changes the data, and (c) treats normality as informative or as a nuisance. Because textbook coverage of these methods was often cursory, and the methodological papers introducing them are usually inaccessible to non-statisticians, this review is designed as a happy medium: a relatively non-technical review of advanced methods that can address non-normality (and heteroscedasticity), serving as a starting point to promote best practice in applying the linear model. We also present three empirical examples to highlight distinctions between these methods' motivations and results, and we review the current state of methodological research on addressing non-normality within the linear modeling framework. We anticipate that our taxonomy will provide a useful overview and starting place for researchers interested in extending their knowledge of approaches developed to address non-normality from the perspective of the linear model.
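As a small illustration of two branches of such a taxonomy, the hedged sketch below contrasts (a) staying within the linear model while treating non-normality and heteroscedasticity as a nuisance (OLS with heteroscedasticity-consistent standard errors) with (b) changing the data (a log transformation of a skewed response). The data are simulated purely for illustration; statsmodels' OLS and HC3 covariance options are real APIs, while the specific model and parameter values are assumptions of the example, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 2, n)
# a right-skewed, heteroscedastic response (illustrative only)
y = np.exp(0.5 + 1.2 * x + rng.normal(scale=0.6, size=n))

X = sm.add_constant(x)
# (a) stay within the linear model: OLS point estimates paired with
#     heteroscedasticity-consistent (HC3) standard errors
fit_robust = sm.OLS(y, X).fit(cov_type="HC3")
# (b) change the data: log-transform the response, then fit ordinary OLS
fit_log = sm.OLS(np.log(y), X).fit()

print(fit_robust.params, fit_robust.bse)   # coefficients on the raw scale
print(fit_log.params, fit_log.bse)         # coefficients on the log scale
```

Note that the two approaches answer different questions: the robust fit keeps the raw-scale mean model, while the transformation changes the estimand to the conditional mean of log(y).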