Similar Articles
1.
A survey of research by Emanuel Parzen on how quantile functions provide elegant and applicable formulas that unify many statistical methods, especially frequentist and Bayesian confidence intervals and prediction distributions. Section 0: In honor of Ted Anderson's 90th birthday; Section 1: Quantile functions, endpoints of prediction intervals; Section 2: Extreme value limit distributions; Sections 3, 4: Confidence and prediction endpoint function: Uniform(0,θ), exponential; Sections 5, 6: Confidence quantile and Bayesian inference for normal parameters μ, σ; Section 7: Two independent samples confidence quantiles; Section 8: Confidence quantiles for proportions, Wilson's formula. We propose ways that Bayesians and frequentists can be friends!
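The Uniform(0,θ) endpoint idea of Sections 3–4 can be illustrated with a minimal simulation sketch (not the paper's own code; all names are illustrative): since M/θ, with M the sample maximum, has CDF uⁿ on (0,1), the quantile M/α^(1/n) is a one-sided upper (1 − α) confidence bound for θ.

```python
import random

def uniform_theta_upper_bound(sample, alpha=0.05):
    """One-sided upper (1 - alpha) confidence bound for theta when the
    data are Uniform(0, theta): M / alpha**(1/n), M the sample maximum."""
    n, m = len(sample), max(sample)
    return m / alpha ** (1.0 / n)

# Check coverage by simulation: the bound should exceed theta ~95% of the time.
random.seed(0)
theta, n, alpha = 3.0, 20, 0.05
hits = sum(
    uniform_theta_upper_bound([random.uniform(0, theta) for _ in range(n)], alpha) >= theta
    for _ in range(2000)
)
coverage = hits / 2000
print(round(coverage, 3))
```

The exactness of the coverage (for any n) is what makes the quantile-function formulation attractive here.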

2.
This paper introduces a median estimator of the logistic regression parameters. It is defined as the classical L1-estimator applied to continuous data Z1, …, Zn obtained by a statistical smoothing of the original binary logistic regression observations Y1, …, Yn. Consistency and asymptotic normality of this estimator are proved. A method called enhancement is introduced which in some cases increases the efficiency of this estimator. Sensitivity to contamination and leverage points is studied by simulation and compared with the sensitivity of some robust estimators previously proposed for logistic regression. The new estimator appears to be more robust for larger sample sizes and higher levels of contamination.

3.
Testing homogeneity is a fundamental problem in finite mixture models. It has been investigated by many researchers and most of the existing works have focused on the univariate case. In this article, the authors extend the use of the EM‐test for testing homogeneity to multivariate mixture models. They show that the EM‐test statistic asymptotically has the same distribution as a certain transformation of a single multivariate normal vector. On the basis of this result, they suggest a resampling procedure to approximate the P‐value of the EM‐test. Simulation studies show that the EM‐test has accurate type I errors and adequate power, and is more powerful and computationally efficient than the bootstrap likelihood ratio test. Two real data sets are analysed to illustrate the application of our theoretical results. The Canadian Journal of Statistics 39: 218–238; 2011 © 2011 Statistical Society of Canada

4.
5.
6.
The Pareto distribution arises in a large number of real-world situations and is also a well-known model for extreme events. In the spirit of Neyman [1937. Smooth tests for goodness of fit. Skand. Aktuarietidskr. 20, 149–199] and Thomas and Pierce [1979. Neyman's smooth goodness-of-fit test when the hypothesis is composite. J. Amer. Statist. Assoc. 74, 441–445], we propose a smooth goodness-of-fit test for the Pareto distribution family, motivated by Le Cam's theory of local asymptotic normality (LAN). We establish the behavior of the associated test statistic first under the null hypothesis that the sample follows a Pareto distribution and second under local alternatives, using the LAN framework. Finally, simulations are provided to study the finite-sample behavior of the test statistic.
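As background for the null model in this abstract, a minimal sketch (not the paper's test) of fitting the Pareto shape by maximum likelihood with known scale: α̂ = n / Σ log(xᵢ/x_min), checked against data simulated by inverse-CDF sampling.

```python
import math
import random

def pareto_shape_mle(sample, x_min):
    """MLE of the Pareto shape alpha with known scale x_min:
    alpha_hat = n / sum(log(x_i / x_min))."""
    return len(sample) / sum(math.log(x / x_min) for x in sample)

random.seed(1)
alpha_true, x_min, n = 2.5, 1.0, 5000
# Inverse-CDF sampling: X = x_min * U**(-1/alpha), U ~ Uniform(0, 1]
sample = [x_min * (1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(n)]
alpha_hat = pareto_shape_mle(sample, x_min)
print(round(alpha_hat, 2))
```

A smooth test would then embed this fitted null in a larger parametric family and score-test the extra parameters.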

7.
A unified approach of parameter-estimation and goodness-of-fit testing is proposed. The new procedures may be applied to arbitrary laws with continuous distribution function. Specifically, both the method of estimation and the goodness-of-fit test are based on the idea of optimally transforming the original data to the uniform distribution, the criterion of optimality being an L2-type distance between the empirical characteristic function of the transformed data and the characteristic function of the uniform (0,1) distribution. Theoretical properties of the new estimators and tests are studied and some connections with classical statistics, moment-based procedures and non-parametric methods are investigated. Comparison with standard procedures via Monte Carlo is also included, along with a real-data application.
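The criterion can be sketched directly (an illustrative toy version on a fixed grid, not the paper's weighted-integral statistic): compare the empirical characteristic function of transformed data with the Uniform(0,1) characteristic function φ(t) = (e^{it} − 1)/(it).

```python
import cmath
import random

def ecf(sample, t):
    """Empirical characteristic function at t."""
    return sum(cmath.exp(1j * t * x) for x in sample) / len(sample)

def uniform_cf(t):
    """Characteristic function of Uniform(0, 1): (e^{it} - 1) / (it)."""
    return (cmath.exp(1j * t) - 1) / (1j * t)

def l2_cf_distance(sample, grid=(0.5, 1.0, 2.0, 4.0)):
    """Crude L2-type distance between the ECF of the (transformed) data
    and the Uniform(0,1) CF, evaluated on a fixed grid of t values."""
    return sum(abs(ecf(sample, t) - uniform_cf(t)) ** 2 for t in grid)

random.seed(2)
u = [random.random() for _ in range(500)]       # a well-transformed sample
v = [random.random() ** 3 for _ in range(500)]  # a badly transformed sample
well, badly = l2_cf_distance(u), l2_cf_distance(v)
print(well < badly)
```

A good transformation drives the distance toward zero; estimation picks the transformation parameter minimizing it, and testing rejects when the minimized distance is large.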

8.
Consider two independent normal populations. Let R denote the ratio of the variances. The usual procedure for testing H0: R = 1 vs. H1: R = r, where r≠1, is the F-test. Let θ denote the proportion of observations to be allocated to the first population. Here we find the value of θ that maximizes the rate at which the observed significance level of the F-test converges to zero under H1, as measured by the half slope.

9.
In this paper, tests of hypotheses are constructed for the family of skew-normal distributions. The proposed tests utilize the fact that the moment generating function of the skew-normal variable satisfies a simple differential equation. The empirical counterpart of this equation, involving the empirical moment generating function, yields simple consistent test statistics. Finite-sample results as well as results from real data are provided for the proposed procedures.
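A numerical check of the kind of identity the test builds on, under the standard parametrization M(t) = 2e^{t²/2}Φ(δt) with δ = λ/√(1+λ²), which satisfies M′(t) − tM(t) = δ√(2/π)·e^{(1−δ²)t²/2} (a sketch of the idea, not the paper's statistic):

```python
import math
import random

def skew_normal_sample(n, lam, rng):
    """Draw from SN(lambda) via X = delta*|Z1| + sqrt(1-delta^2)*Z2."""
    delta = lam / math.sqrt(1 + lam * lam)
    return [delta * abs(rng.gauss(0, 1)) + math.sqrt(1 - delta * delta) * rng.gauss(0, 1)
            for _ in range(n)]

rng = random.Random(3)
lam, t = 1.5, 0.5
delta = lam / math.sqrt(1 + lam * lam)
x = skew_normal_sample(20000, lam, rng)

m_hat = sum(math.exp(t * xi) for xi in x) / len(x)         # empirical MGF at t
m_prime = sum(xi * math.exp(t * xi) for xi in x) / len(x)  # its empirical derivative
rhs = math.sqrt(2 / math.pi) * delta * math.exp((1 - delta * delta) * t * t / 2)
residual = m_prime - t * m_hat - rhs                       # ~0 under the SN null
print(round(residual, 3))
```

The test statistics in the paper are built from such empirical residuals of the MGF equation; large residuals indicate departure from the skew-normal family.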

10.
11.
Lu Lin, Statistical Papers (2004) 45(4): 529–544
The quasi-score function, as defined by Wedderburn (1974), McCullagh (1983) and others, is a linear function of the observations. The generalized quasi-score function introduced in this paper is a linear function of some unbiased basis functions, which may or may not be linear in the observations and can easily be constructed from the interpretation of the parameters, such as the mean or median. The generalized quasi-likelihood estimate obtained from such a generalized quasi-score function is consistent and asymptotically normally distributed. As a result, the optimum generalized quasi-score is obtained and a method to construct the optimum unbiased basis function is introduced. In order to construct the potential function, a conservative generalized estimating function is defined; when the estimating function is conservative, the potential function for the projected score has many properties of a log-likelihood function. Finally, some examples are given to illustrate the theoretical results. This paper is supported by NNSF project (10371059) of China and the Youth Teacher Foundation of Nankai University.

12.
We discuss the general form of a first-order correction to the maximum likelihood estimator, expressed in terms of the gradient of a function which could, for example, be the logarithm of a prior density. In terms of Kullback–Leibler divergence, the correction gives an asymptotic improvement over maximum likelihood under rather general conditions. The theory is illustrated for Bayes estimators with conjugate priors, and the optimal choice of hyper-parameter to improve the maximum likelihood estimator is discussed. The results based on Kullback–Leibler risk are extended to a wide class of risk functions.
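For the normal-mean case with a conjugate N(μ₀, τ²) prior, the correction takes the simple form θ_c = θ̂ + I_n(θ̂)⁻¹ · ∇ log π(θ̂), with Fisher information I_n = n/σ². A minimal sketch under these assumptions (the function name is illustrative):

```python
import statistics

def corrected_mle_normal_mean(data, sigma2, mu0, tau2):
    """First-order correction of the sample-mean MLE for a normal mean,
    using the gradient of a conjugate N(mu0, tau2) log-prior:
    theta_c = theta_hat + (1 / I_n) * d/dtheta log pi(theta_hat),
    with Fisher information I_n = n / sigma2."""
    n = len(data)
    theta_hat = statistics.fmean(data)
    grad_log_prior = (mu0 - theta_hat) / tau2
    return theta_hat, theta_hat + (sigma2 / n) * grad_log_prior

data = [2.1, 1.8, 2.6, 2.2, 1.9]
theta_hat, theta_c = corrected_mle_normal_mean(data, sigma2=1.0, mu0=0.0, tau2=1.0)
print(theta_hat, theta_c)  # the corrected estimate is shrunk toward mu0
```

The shrinkage toward μ₀ is exactly the first-order behaviour the Kullback–Leibler analysis rewards.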

13.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets, as in stratified analyses and meta-analyses, a weighted average with weights 'proportional' to inverse variance matrices is shown to have a minimal variance matrix (a standard fact when d = 1), minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is asymptotically efficient in the combined data set, avoiding the need to merge the data sets and estimate the parameter afresh. This holds whatever additional non-common nuisance parameters appear in the models for the various data sets. A special case appeared in Fisher [1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725]: optimal weights are 'proportional' to information matrices, and he argued that sample information rather than expected information should be used as weights, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis (proportional hazards, logistic or linear), combination of independent ROC curves, and meta-analysis. A test for homogeneity of the parameter across the data sets is also given.
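The d = 1 case is easy to sketch (inverse-variance weighting; the matrix-weighted version replaces scalars with matrix inverses):

```python
def combine_estimates(estimates, variances):
    """Inverse-variance weighted average of independent estimates of a
    common scalar parameter (the d = 1 case of matrix weighting).
    Returns the combined estimate and its variance 1 / sum(1/v_i)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

est, var = combine_estimates([1.2, 0.8], [0.04, 0.01])
print(round(est, 2), var)
```

Note the combined variance is never larger than the smallest input variance, the scalar shadow of the minimal-variance-matrix property above.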

14.
15.

For a two-step monotone missing dataset drawn from a multivariate normal population, a T2-type statistic (analogous to Hotelling's T2 statistic) and the likelihood ratio (LR) are often used to test a hypothesis about the mean vector. With complete data, Hotelling's T2 test and the LR test are equivalent; with two-step monotone missing data they are not, so it is natural to ask which statistic is preferable in terms of power. In this paper, we derive the asymptotic power functions of both statistics under a local alternative and obtain an explicit expression for their difference. Furthermore, under several parameter settings, we compare the LR and T2-type tests numerically, using differences in empirical power and in the asymptotic power functions. Summarizing the results, we recommend the LR test for testing a mean vector.

16.
Continuous non-Gaussian stationary processes of the Ornstein–Uhlenbeck (OU) type are becoming increasingly popular, given their flexibility in modelling stylized features of financial series such as asymmetry, heavy tails and jumps. The use of non-Gaussian marginal distributions makes likelihood analysis of these processes infeasible for virtually all cases of interest. This paper exploits the self-decomposability of the marginal laws of OU processes to provide explicit expressions for the characteristic function, which can be applied to several models, and to develop efficient estimation techniques based on the empirical characteristic function. Extensions to OU-based stochastic volatility models are provided.

17.
Recently, Jammalamadaka and Mangalam [2003. Non-parametric estimation for middle censored data. J. Nonparametric Statist. 15, 253–265] introduced a general censoring scheme called "middle censoring" in a non-parametric setup. In this paper we consider this middle-censoring scheme when the lifetimes of the items are exponentially distributed and the censoring mechanism is independent and non-informative. In this setup, we derive the maximum likelihood estimator and study its consistency and asymptotic normality. We also derive the Bayes estimate of the exponential parameter under a gamma prior. Since a theoretical construction of the credible interval is quite difficult, we propose and implement a Gibbs sampling technique to construct credible intervals. Monte Carlo simulations are performed to evaluate the small-sample behavior of the proposed techniques. A real data set is analyzed to illustrate the practical application of the proposed methods.
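For orientation, the gamma prior mentioned here is conjugate for the exponential rate in the *uncensored* case, where the posterior is available in closed form; the paper's contribution is handling middle censoring, where it is not. A sketch of the complete-data update only:

```python
def exponential_rate_posterior(data, a, b):
    """Conjugate gamma update for the exponential rate lambda with
    COMPLETE (uncensored) data: Gamma(a, b) prior -> Gamma(a + n, b + sum(x))
    posterior. Returns the posterior shape, rate, and posterior mean
    (the Bayes estimate under squared-error loss)."""
    n, s = len(data), sum(data)
    shape, rate = a + n, b + s
    return shape, rate, shape / rate

shape, rate, bayes_est = exponential_rate_posterior([0.5, 1.5, 1.0, 2.0], a=2.0, b=1.0)
print(shape, rate, bayes_est)
```

With middle-censored observations the likelihood involves interval probabilities, breaking conjugacy, which is why the paper resorts to Gibbs sampling for credible intervals.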

18.
Hartley's test for homogeneity of k normal‐distribution variances is based on the ratio between the maximum sample variance and the minimum sample variance. In this paper, the author uses the same statistic to test for equivalence of k variances. Equivalence is defined in terms of the ratio between the maximum and minimum population variances, and one concludes equivalence when Hartley's ratio is small. Exact critical values for this test are obtained by using an integral expression for the power function and some theoretical results about the power function. These exact critical values are available both when sample sizes are equal and when sample sizes are unequal. One related result in the paper is that Hartley's test for homogeneity of variances is no longer unbiased when the sample sizes are unequal. The Canadian Journal of Statistics 38: 647–664; 2010 © 2010 Statistical Society of Canada
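Hartley's statistic itself is a one-liner; a minimal sketch (the critical values, which are the paper's contribution, are not reproduced here):

```python
import statistics

def hartley_fmax(samples):
    """Hartley's F_max statistic: ratio of the largest to the smallest
    sample variance across k groups."""
    variances = [statistics.variance(s) for s in samples]
    return max(variances) / min(variances)

groups = [
    [4.1, 5.2, 3.9, 4.8, 5.0],
    [4.0, 4.3, 4.1, 4.2, 4.4],
    [3.5, 5.5, 4.5, 3.0, 6.0],
]
fmax = hartley_fmax(groups)
print(round(fmax, 1))  # 65.0 for these toy groups
```

For the homogeneity test one rejects when F_max is large; for the equivalence test of this paper one concludes equivalence when F_max falls below an exact critical value.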

19.
Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian J. Agricultural Res. 3, 385–390] as an effective way to estimate the unknown population mean. Chuiv and Sinha [1998. On some aspects of ranked set sampling in parametric estimation. In: Balakrishnan, N., Rao, C.R. (Eds.), Handbook of Statistics, vol. 17. Elsevier, Amsterdam, pp. 337–377] and Chen et al. [2004. Ranked Set Sampling—Theory and Application. Lecture Notes in Statistics, vol. 176. Springer, New York] have provided excellent surveys of RSS and various inferential results based on RSS. In this paper, we use the idea of order statistics from independent and non-identically distributed (INID) random variables to propose ordered ranked set sampling (ORSS) and then develop optimal linear inference based on ORSS. We determine the best linear unbiased estimators based on ORSS (BLUE-ORSS) and show that they are more efficient than BLUE-RSS for the two-parameter exponential, normal and logistic distributions. Although this is not the case for the one-parameter exponential distribution, the relative efficiency of the BLUE-ORSS (to BLUE-RSS) is very close to 1. Furthermore, we compare both BLUE-ORSS and BLUE-RSS with the BLUE based on order statistics from a simple random sample (BLUE-OS). We show that BLUE-ORSS is uniformly better than BLUE-OS, while BLUE-RSS is not as efficient as BLUE-OS for small sample sizes (n < 5).
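The basic RSS scheme underlying all of these estimators can be sketched in a few lines (assuming perfect ranking; the function name is illustrative):

```python
import random
import statistics

def ranked_set_sample(draw, k, rng):
    """One cycle of ranked set sampling with set size k: draw k sets of
    k units each, and keep the i-th order statistic of the i-th set."""
    return [sorted(draw(rng) for _ in range(k))[i] for i in range(k)]

rng = random.Random(5)
draw = lambda r: r.gauss(10.0, 2.0)   # population: N(10, 2^2)
cycles = [ranked_set_sample(draw, 4, rng) for _ in range(500)]
rss_mean = statistics.fmean(x for cycle in cycles for x in cycle)
print(round(rss_mean, 2))  # unbiased for the population mean 10
```

Because each retained value is a different order statistic, the RSS mean is unbiased with smaller variance than a simple random sample mean of the same size; ORSS further orders these INID order statistics before building the BLUE.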

20.
We propose optimal procedures for partitioning k multivariate normal populations into two disjoint subsets with respect to a given standard vector. A multivariate normal population is defined as good or bad according to whether its Mahalanobis distance to the known standard vector is small or large. Partitioning the k populations reduces to partitioning k non-central chi-square or non-central F distributions with respect to the corresponding non-centrality parameters, depending on whether the covariance matrices are known or unknown. The minimum required sample size for each population is determined to ensure that the probability of a correct decision attains a specified level. An example is given to illustrate our procedures.
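The good/bad labelling rule can be sketched for the known-covariance, two-dimensional case (an illustrative toy with a hypothetical threshold, not the paper's sample-size-calibrated procedure):

```python
def mahalanobis2(x, mu, cov_inv):
    """Squared Mahalanobis distance (x - mu)' * cov_inv * (x - mu)
    for 2-dimensional vectors, with a known inverse covariance matrix."""
    d = [x[0] - mu[0], x[1] - mu[1]]
    return (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1])
            + d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))

def partition_populations(means, standard, cov_inv, threshold):
    """Label each population 'good' if its squared Mahalanobis distance
    to the standard vector is below the threshold, else 'bad'."""
    return ['good' if mahalanobis2(m, standard, cov_inv) < threshold else 'bad'
            for m in means]

identity = [[1.0, 0.0], [0.0, 1.0]]
labels = partition_populations(
    means=[(0.1, 0.2), (3.0, 3.0), (0.5, -0.4)],
    standard=(0.0, 0.0), cov_inv=identity, threshold=1.0)
print(labels)  # ['good', 'bad', 'good']
```

In the paper, the distances are estimated from data, which is why the decision reduces to comparing non-central chi-square (or F) distributions and why a minimum sample size is needed to guarantee the probability of correct partitioning.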
