Similar Articles
20 similar articles found
1.
The approximation of the distribution function of a test statistic is extremely important in statistics. Standard and higher-order saddlepoint approximations are considered in the tails of the limiting distribution of the modified Anderson–Darling test. The saddlepoint approximations are compared with the upper-tail approximation of Sinclair et al. (1990. Modified Anderson–Darling test. Communications in Statistics—Theory and Methods 19:3677–3686). An empirical function is derived to estimate the critical values of the saddlepoint approximation.
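The paper's expansions are specific to the modified Anderson–Darling statistic. As a self-contained illustration of the kind of tail approximation involved, here is the standard Lugannani–Rice saddlepoint formula applied to a case with a known answer: the mean of n i.i.d. Exp(1) variables, whose exact tail is a gamma survival function. This is a generic sketch, not the paper's method:

```python
import math
from scipy.stats import norm, gamma

def lugannani_rice_exp_mean(x, n):
    """P(sample mean > x) for n iid Exp(1) variables via Lugannani-Rice.
    CGF of Exp(1): K(t) = -log(1 - t); the saddlepoint solves K'(s) = x."""
    s = 1.0 - 1.0 / x                       # K'(s) = 1/(1 - s) = x
    K = -math.log(1.0 - s)                  # K(s)
    K2 = 1.0 / (1.0 - s) ** 2               # K''(s)
    w = math.copysign(math.sqrt(2 * n * (s * x - K)), s)
    u = s * math.sqrt(n * K2)
    return norm.sf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)

n, x = 10, 1.5
approx = lugannani_rice_exp_mean(x, n)
exact = gamma.sf(n * x, n)                  # sum of n Exp(1) is Gamma(n, 1)
```

Even at n = 10 the relative error in this upper-tail probability is well under one percent, which is the kind of accuracy that motivates saddlepoint methods over normal limits in the tails.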

2.
As a measure of association between two nominal categorical variables, the lambda coefficient, or Goodman–Kruskal's lambda, has become one of the most popular measures. Its popularity is primarily due to its simple and meaningful definition and interpretation in terms of the proportional reduction in error when predicting a random observation's category for one variable given (versus not knowing) its category for the other variable. It is an asymmetric measure, although a symmetric version is available. The lambda coefficient does, however, have a widely recognized limitation: it can equal zero even when the variables are not independent and all other measures take on positive values. To mitigate this problem, an alternative lambda coefficient is introduced in this paper as a slight modification of the Goodman–Kruskal lambda. The properties of the new measure are discussed and a symmetric form is introduced. A statistical inference procedure is developed and a numerical example is provided.
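For reference, the classical (unmodified) asymmetric lambda is straightforward to compute from a contingency table; the paper's modified coefficient differs from this sketch:

```python
import numpy as np

def goodman_kruskal_lambda(table):
    """Asymmetric Goodman-Kruskal lambda for predicting the column variable
    from the row variable: proportional reduction in prediction error."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    best_overall = table.sum(axis=0).max()   # correct guesses with no predictor
    best_per_row = table.max(axis=1).sum()   # correct guesses knowing the row
    return (best_per_row - best_overall) / (n - best_overall)

table = [[10, 5], [3, 12]]                   # illustrative 2x2 counts
lam = goodman_kruskal_lambda(table)          # (22 - 17) / (30 - 17) = 5/13
```

The modal-prediction structure is also what causes the limitation noted above: if one column dominates every row, `best_per_row` equals `best_overall` and lambda is zero even under clear dependence.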

3.
This article presents a constrained maximization of the Shapiro–Wilk W statistic for estimating the parameters of the Johnson SB distribution. The gradient of the W statistic with respect to the minimum and range parameters is used within a quasi-Newton framework to achieve a fit for all four parameters. The method is evaluated with measures of bias and precision using pseudo-random samples from three different SB populations. The population means were estimated with an average relative bias of less than 0.1% and the population standard deviations with less than 4.0% relative bias. The methodology appears promising as a tool for fitting this sometimes difficult distribution.
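The idea can be sketched in a few lines: for candidate minimum and range parameters, logit-transform the sample and score the candidate by the Shapiro–Wilk W of the transformed values, since the correct bounds make the transform normal. The crude grid search below stands in for the paper's gradient-based quasi-Newton iteration, and the shape parameters a = 1, b = 2 and sample size are illustrative choices:

```python
import numpy as np
from scipy.stats import johnsonsb, shapiro

rng = np.random.default_rng(0)
x = johnsonsb.rvs(a=1.0, b=2.0, loc=0.0, scale=1.0, size=200, random_state=rng)

def w_stat(xi, lam, x):
    """Shapiro-Wilk W of the logit-transformed sample; if the minimum xi and
    range lam are right, y is affinely normal and W is near its maximum."""
    u = (x - xi) / lam
    if u.min() <= 0 or u.max() >= 1:
        return -np.inf
    y = np.log(u / (1 - u))
    return shapiro(y)[0]

# Grid search over (minimum, range) in place of the quasi-Newton step.
xis = np.linspace(x.min() - 0.5, x.min() - 0.01, 20)
lams = np.linspace(x.max() - x.min() + 0.01, x.max() - x.min() + 1.0, 20)
xi, lam = max(((a, b) for a in xis for b in lams), key=lambda p: w_stat(*p, x))
u = (x - xi) / lam
y = np.log(u / (1 - u))
delta = 1.0 / y.std(ddof=1)        # shape parameters recovered from the
gamma_ = -delta * y.mean()         # mean/sd of the transformed (normal) scores
best_w = w_stat(xi, lam, x)
```

Once the two bound parameters are fixed, the two shape parameters follow in closed form from the mean and standard deviation of the transformed sample, which is why only the minimum and range need numerical search.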

4.
In this paper we introduce a three-parameter lifetime distribution following the Marshall and Olkin [New method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika. 1997;84(3):641–652] approach. The proposed distribution is a compound of the Lomax and Logarithmic distributions (LLD). We provide a comprehensive study of the mathematical properties of the LLD. In particular, the density function, the shape of the hazard rate function, a general expansion for moments, the density of the rth order statistic, and the mean and median deviations of the LLD are derived and studied in detail. The maximum likelihood estimators of the three unknown parameters of the LLD are obtained. Asymptotic confidence intervals for the parameters are also obtained based on the asymptotic variance–covariance matrix. Finally, a real data set is analysed to show the potential of the new proposed distribution.

5.
Anderson–Darling goodness-of-fit test percentage points are given for the three-parameter lognormal distribution in both cases: positive skewness with a lower bound, and negative skewness with an upper bound. The focus is on the most practical case, in which all parameters are unknown and must be estimated from the sample data. Fitted response functions for the critical values, based on the shape parameter and sample size, are reported to avoid the need for a vast array of tables.
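Percentage points of this kind are typically obtained by Monte Carlo: simulate samples, estimate the parameters, compute the Anderson–Darling statistic, and take empirical quantiles. The sketch below does this for the simpler normal case with estimated mean and standard deviation (the paper's fitted response surfaces handle the harder three-parameter lognormal case); the sample size and replication count are illustrative:

```python
import numpy as np
from scipy.stats import norm

def a_squared(u):
    """Anderson-Darling A^2 from the sorted probability-integral transforms u."""
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

# Monte Carlo 5% percentage point with both parameters estimated from the data.
rng = np.random.default_rng(1)
n, reps = 50, 4000
stats = []
for _ in range(reps):
    x = rng.standard_normal(n)
    u = np.sort(norm.cdf((x - x.mean()) / x.std(ddof=1)))
    stats.append(a_squared(np.clip(u, 1e-12, 1 - 1e-12)))
crit_5pct = np.quantile(stats, 0.95)
```

Because the parameters are estimated from the same sample being tested, the null distribution of A² is far tighter than in the fully specified case, which is why separate tables (or response functions, as here) are needed.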

6.
Recent efforts by the American Statistical Association to improve statistical practice, especially in countering the misuse and abuse of null hypothesis significance testing (NHST) and p-values, are to be welcomed. But will they be successful? The present study offers compelling evidence that this will be an extraordinarily difficult task. Citation counts for 25 articles and books severely critical of NHST's negative impact on good science show that, although the issue was and is well known, this criticism did nothing to stem NHST usage over the period 1960–2007; on the contrary, its employment increased during this time. To succeed in this endeavor, and to restore the relevance of the statistics profession to the scientific community in the 21st century, the ASA must be prepared to dispense detailed advice. This includes specifying those situations, if they can be identified, in which the p-value plays a clearly valuable role in data analysis and interpretation. The ASA might also consider a statement that recommends abandoning the use of p-values.

7.
The objective of this article is to present a brief chronological record of the American Statistical Association (ASA) from its modest beginnings in Boston in 1839 to its present status as a worldwide professional organization with approximately 19,000 members and a headquarters in Alexandria, Virginia. Emphasis is placed on accomplishments over the past 25 years of the ASA from the end of its Sesquicentennial Celebration in 1989 to the end of its 175th Anniversary Celebration in 2014. Its continued growth during this period has been achieved through the work of outstanding leaders, sections, chapters, and committees. This article briefly summarizes its achievements in organizational efficiency, membership services, innovative meetings, and publications. It also describes its work in structural change, education, public relations, and science policy. It ends with a positive look to the future.

8.
The pretest–posttest design is widely used to investigate the effect of an experimental treatment in biomedical research. The treatment effect may be assessed using analysis of variance (ANOVA) or analysis of covariance (ANCOVA). The normality assumption for parametric ANOVA and ANCOVA may be violated due to outliers and skewness of data. Nonparametric methods, robust statistics, and data transformation may be used to address the nonnormality issue. However, there has been no simultaneous comparison of the four statistical approaches in terms of empirical type I error probability and statistical power. We studied 13 ANOVA and ANCOVA models based on the parametric approach, rank and normal score-based nonparametric approaches, Huber M-estimation, and the Box–Cox transformation, using normal data with and without outliers and lognormal data. We found that ANCOVA models preserve the nominal significance level better and are more powerful than their ANOVA counterparts when the dependent variable and covariate are correlated. Huber M-estimation is the most liberal method. Nonparametric ANCOVA, especially ANCOVA based on the normal score transformation, preserves the nominal significance level, has good statistical power, and is robust to the data distribution.
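The core finding that covariate adjustment buys power when pretest and posttest are correlated can be demonstrated with a stripped-down simulation covering only the parametric pair (normal errors, no outliers); the effect size, correlation, and sample size below are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def pvalues(n=40, effect=0.5, rho=0.6):
    """One simulated pretest-posttest trial: ANOVA ignores the pretest,
    ANCOVA adjusts for it via ordinary least squares."""
    group = np.repeat([0, 1], n)
    pre = rng.standard_normal(2 * n)
    post = rho * pre + effect * group + np.sqrt(1 - rho**2) * rng.standard_normal(2 * n)
    p_anova = stats.ttest_ind(post[group == 1], post[group == 0]).pvalue
    X = np.column_stack([np.ones(2 * n), group, pre])   # intercept, group, covariate
    beta, res, *_ = np.linalg.lstsq(X, post, rcond=None)
    df = 2 * n - 3
    se = np.sqrt((res[0] / df) * np.linalg.inv(X.T @ X)[1, 1])
    p_ancova = 2 * stats.t.sf(abs(beta[1] / se), df)
    return p_anova, p_ancova

power = np.mean([[p < 0.05 for p in pvalues()] for _ in range(500)], axis=0)
anova_power, ancova_power = power
```

Adjusting for the pretest removes the `rho**2` share of the outcome variance, shrinking the residual standard error and raising power relative to the unadjusted comparison.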

9.
We suggest finite sample tests for the location of the efficient frontier with the estimated parameters in mean–variance space. The exact densities of the test statistics are derived. We implement the introduced testing procedure empirically by considering monthly returns of ten developed stock markets. It is shown that ignoring the uncertainty about the estimated parameters leads to a more frequent reconstruction of the efficient frontier.

11.
The orthogonalization of undesigned experiments is introduced to increase the statistical precision of the estimated regression coefficients. The goals are to minimize the covariance and the bias of the least squares estimator for estimating the path of steepest ascent (SA), which leads the user toward the neighbourhood of the optimum response. An orthogonal design is established to decrease the inverse determinant of X′X and the angle between the true and estimated SA paths. For the orthogonalization of an undesigned matrix, our proposed solution is built on the modified Gram–Schmidt strategy, related to the process of Gaussian elimination. The proposed solution offers an orthogonal basis, in full working accuracy, for the space spanned by the columns of the original matrix.
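The textbook modified Gram–Schmidt procedure referred to here deflates each remaining column as soon as a direction is normalized, which is what gives it better numerical behaviour than the classical variant. A minimal sketch (a random matrix stands in for an undesigned experiment):

```python
import numpy as np

def modified_gram_schmidt(X):
    """Column-wise modified Gram-Schmidt: returns Q with orthonormal columns
    spanning the same space as the columns of X."""
    Q = np.array(X, dtype=float)
    k = Q.shape[1]
    for j in range(k):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for i in range(j + 1, k):        # deflate the remaining columns now,
            Q[:, i] -= (Q[:, j] @ Q[:, i]) * Q[:, j]   # not at their own step
    return Q

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))             # stand-in for an undesigned matrix
Q = modified_gram_schmidt(X)
orthogonality_err = np.abs(Q.T @ Q - np.eye(4)).max()
```

Because each projection uses the already-deflated column rather than the original one, rounding errors do not accumulate the way they do in classical Gram–Schmidt, which is the "full working accuracy" property the abstract claims.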

12.
We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semimartingale log asset price process, which is subject to noise and nonsynchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM), recently introduced by Bibinger et al. We prove consistency and a point-wise stable central limit theorem for the proposed spot covariance estimator in a very general setup with stochastic volatility, leverage effects, and general noise distributions. Moreover, we extend the LMM estimator to be robust against autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. Based on simulations, we provide empirical guidance on the effective implementation of the estimator and apply it to high-frequency data of a cross-section of Nasdaq blue chip stocks. Employing the estimator to estimate spot covariances, correlations, and volatilities in normal but also unusual periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, and (iii) can increase strongly and nearly instantaneously if new information arrives. Supplementary materials for this article are available online.

13.
An alternative to the maximum likelihood (ML) method, the maximum spacing (MSP) method, was introduced in Cheng and Amin [1983. Estimating parameters in continuous univariate distributions with a shifted origin. J. Roy. Statist. Soc. Ser. B 45, 394–403], and independently in Ranneby [1984. The maximum spacing method. An estimation method related to the maximum likelihood method. Scand. J. Statist. 11, 93–112]. The method, as described by Ranneby (1984), is derived from an approximation of the Kullback–Leibler divergence. Since the introduction of the MSP method, several closely related methods have been suggested. This article is a survey of such methods based on spacings and the Kullback–Leibler divergence. These estimation methods possess good properties and work in situations where the ML method does not. Important issues such as the handling of ties and incomplete data are discussed, and it is argued that by using Moran's [1951. The random division of an interval—Part II. J. Roy. Statist. Soc. Ser. B 13, 147–150] statistic, on which the MSP method is based, we can effectively combine: (a) a test of whether an assigned model of distribution functions is correct, (b) an asymptotically efficient estimation of an unknown parameter θ0, and (c) a computation of a confidence region for θ0.
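The MSP principle itself fits in a few lines: maximize the sum of the logarithms of the spacings of the fitted CDF evaluated at the order statistics (Moran's statistic, negated below for a minimizer). A minimal sketch for an exponential model with illustrative true rate 0.5:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = np.sort(rng.exponential(scale=2.0, size=300))    # true rate = 0.5

def neg_log_spacings(rate):
    """Negative Moran statistic: log-spacings of F(x; rate) = 1 - exp(-rate*x),
    with F = 0 and F = 1 appended at the two ends."""
    F = 1.0 - np.exp(-rate * x)
    D = np.diff(np.concatenate(([0.0], F, [1.0])))
    if np.any(D <= 0):                               # ties would need special care
        return np.inf
    return -np.log(D).sum()

res = minimize_scalar(neg_log_spacings, bounds=(1e-3, 5.0), method="bounded")
rate_msp = res.x
```

Because the objective stays finite even when the likelihood is unbounded (e.g. shifted-origin models where ML breaks down), the same code pattern works in exactly the situations the survey highlights.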

15.
In this article, we propose a two-stage generalized case–cohort design and develop an efficient inference procedure for data collected under this design. In the first stage, we observe the failure time, the censoring indicator, and covariates that are easy or cheap to measure; in the second stage, we select a subcohort by simple random sampling, together with a subset of the failures among the remaining first-stage subjects, and observe their exposures, which are difficult or expensive to measure. We derive estimators for the regression parameters in the accelerated failure time model under the two-stage generalized case–cohort design through an estimated augmented estimating equation and the kernel function method. The resulting estimators are shown to be consistent and asymptotically normal. The finite-sample performance of the proposed method is evaluated through simulation studies. The proposed method is applied to a real data set from the National Wilms Tumor Study Group.

17.
This paper provides a complete proof of the Welch–Berlekamp theorem, on which the Welch–Berlekamp algorithm is founded. By introducing an analytic approach to coset-leader decoders for Reed–Solomon codes, the Welch–Berlekamp key equation for error correction is enlarged, a complete proof of the Welch–Berlekamp theorem is derived in a natural way, and the theorem is extended so that the BCH-bound constraint is removed.

20.
The exponentially weighted moving average (EWMA) control charts with variable sampling intervals (VSIs) have been shown to be substantially quicker than fixed sampling interval (FSI) EWMA control charts in detecting process mean shifts. The usual assumption in designing a control chart is that the data or measurements are normally distributed. However, this assumption may not hold for some processes. In the present paper, the performances of the EWMA and combined X̄–EWMA control charts with VSIs are evaluated under non-normality. It is shown that adding the VSI feature to the EWMA control charts results in very substantial decreases in the expected time to detect shifts in the process mean under both normality and non-normality. However, the false alarm rate and detection ability of the combined X̄–EWMA chart are affected if the process data are not normally distributed.
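The VSI mechanism is simple to simulate: sample again quickly when the EWMA statistic falls in a warning region near the control limits, and slowly otherwise. The sketch below compares simulated detection times for a one-sigma shift under normality; the smoothing constant, limits, warning width, and interval lengths are arbitrary illustrative choices, and the two charts are not matched on in-control average time to signal, as a real design comparison would require:

```python
import numpy as np

rng = np.random.default_rng(5)

def time_to_signal(shift, lam=0.2, L=2.9, w=1.0, d_short=0.2, d_long=1.4):
    """EWMA chart with two sampling intervals: the next sample is taken after
    d_short if the statistic is in the warning region, after d_long otherwise.
    Returns the simulated time until the chart signals the mean shift."""
    sigma_z = np.sqrt(lam / (2 - lam))       # asymptotic EWMA standard deviation
    z, t = 0.0, 0.0
    while True:
        t += d_short if abs(z) > w * sigma_z else d_long
        z = (1 - lam) * z + lam * (shift + rng.standard_normal())
        if abs(z) > L * sigma_z:
            return t

# Average detection time for a one-sigma shift: VSI vs a fixed unit interval.
vsi = np.mean([time_to_signal(1.0) for _ in range(2000)])
fsi = np.mean([time_to_signal(1.0, d_short=1.0, d_long=1.0) for _ in range(2000)])
```

Once the shift pushes the statistic into the warning region, almost all subsequent intervals are short, so the expected time to signal drops even though the number of samples until a signal is unchanged.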
