Similar Articles
A total of 20 similar articles were found.
1.
Statistical inference procedures based on transforms such as the characteristic function and the probability generating function have been examined by many researchers because they are often much simpler to work with than probability density functions. Here, a probability generating function-based Jeffreys divergence measure is proposed for parameter estimation and goodness-of-fit testing. As a member of the class of M-estimators, the proposed estimator is consistent. The proposed goodness-of-fit test also has good statistical power. The proposed divergence measure shows improved performance over existing probability generating function-based measures. Real data examples are given to illustrate the proposed parameter estimation method and goodness-of-fit test.
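A minimal sketch of the idea, assuming a symmetrized (Jeffreys-type) discrepancy between the empirical and model probability generating functions integrated over t in [0, 1]; the Poisson example and all function names are illustrative assumptions, not the article's exact measure.

```python
# Hedged sketch: estimate a Poisson mean by minimizing a Jeffreys-type
# discrepancy between the empirical and model probability generating
# functions (PGFs).  The exact functional used in the article may differ.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=200)            # sample from Poisson(3)

def empirical_pgf(t, data):
    return np.mean(t ** data)             # g_n(t) = (1/n) sum t^{x_i}

def poisson_pgf(t, lam):
    return np.exp(lam * (t - 1.0))        # g_theta(t) for Poisson(lam)

def jeffreys_pgf_divergence(lam, data):
    # integrate (g_n - g_theta) * (log g_n - log g_theta) over t in [0, 1]
    f = lambda t: ((empirical_pgf(t, data) - poisson_pgf(t, lam))
                   * (np.log(empirical_pgf(t, data)) - np.log(poisson_pgf(t, lam))))
    return quad(f, 0.0, 1.0)[0]

res = minimize_scalar(jeffreys_pgf_divergence, bounds=(0.1, 10.0),
                      args=(x,), method="bounded")
print("PGF-divergence estimate:", res.x, " sample mean (MLE):", x.mean())
```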

2.
The generalized empirical likelihood (GEL) method produces a class of estimators of parameters defined via general estimating equations. This class includes several important estimators, such as the empirical likelihood (EL), exponential tilting (ET), and continuous updating (CUE) estimators. We examine the information-geometric structure of GEL estimators. We introduce a class of estimators closely related to the class of minimum divergence (MD) estimators and show that there is a one-to-one correspondence between this class and the GEL class.
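A minimal sketch of one GEL member, exponential tilting, for an overidentified moment model; the exponential-distribution moment conditions and all names here are illustrative assumptions, not taken from the article.

```python
# Hedged sketch: an exponential-tilting (ET) member of the GEL class for an
# overidentified moment model.  Moment conditions assume X ~ Exponential(theta):
# E[X - theta] = 0 and E[X^2 - 2*theta^2] = 0.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=300)       # true theta = 2

def moments(theta):
    # n x 2 matrix of moment functions g_i(theta)
    return np.column_stack([x - theta, x**2 - 2.0 * theta**2])

def inner_profile(theta):
    # inner problem: min over lambda of (1/n) sum exp(lambda' g_i)
    g = moments(theta)
    obj = lambda lam: np.mean(np.exp(g @ lam))
    grad = lambda lam: g.T @ np.exp(g @ lam) / len(g)
    return minimize(obj, np.zeros(2), jac=grad, method="BFGS").fun

# outer problem: the ET estimator maximizes the profiled inner objective
res = minimize_scalar(lambda th: -inner_profile(th), bounds=(0.5, 5.0),
                      method="bounded")
print("ET/GEL estimate of theta:", res.x, " sample mean:", x.mean())
```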

3.
This article considers a probability generating function-based divergence statistic for parameter estimation. The performance and robustness of the proposed statistic are studied for the negative binomial distribution by Monte Carlo simulation, in particular in comparison with maximum likelihood and minimum Hellinger distance estimation. Numerical examples are given as illustrations of goodness of fit.

4.
This article studies the minimum divergence (MD) class of estimators for econometric models specified through moment restrictions. We show that MD estimators can be obtained as solutions to a tractable lower-dimensional optimization problem. This problem is similar to the one solved by the generalized empirical likelihood estimators of Newey and Smith (2004), but it is equivalent to it only for a subclass of divergences. The MD framework provides a coherent testing theory: tests for overidentification and parametric restrictions in this framework can be interpreted as semiparametric versions of Pearson-type goodness-of-fit tests. The higher-order properties of MD estimators are also studied, and it is shown that MD estimators that have the same higher-order bias as the empirical likelihood (EL) estimator also share the same higher-order mean square error and are all higher-order efficient. We identify members of the MD class that are not only higher-order efficient but also, unlike the EL estimator, well behaved when the moment restrictions are misspecified.

5.
金华 《统计研究》2007,24(7):75-78
Single-match football betting was extremely popular in 2006, while the nationwide networked basketball lottery performed poorly, and the betting-type Guangdong basketball lottery was discontinued after only four months of trial operation; the main problems were that the basketball game formats and prize structures were not well designed. How to revive the slumping basketball lottery is a question worth studying. To help the Guangdong basketball lottery become the mainstream of the future basketball lottery market, this article suggests raising the payout rate of its first prize, offering a break-even guarantee, and adding a second prize. On this basis, a probability model is built from data of the 2004-2005 NBA season to estimate winning probabilities, providing a useful reference for setting prizes reasonably.

6.
This study takes up inference in linear models with generalized error and generalized t distributions. For the generalized error distribution, two computational algorithms are proposed. The first is based on indirect Bayesian inference using an approximating finite scale mixture of normal distributions. The second is based on Gibbs sampling, and the Gibbs sampler involves only drawing random numbers from standard distributions. This is important because the impression had previously been that an exact analysis of the generalized error regression model using Gibbs sampling is not possible. Next, we describe computational Bayesian inference for linear models with generalized t disturbances based on Gibbs sampling, exploiting the fact that the model is a mixture of generalized error distributions with inverse generalized gamma distributions for the scale parameter. The linear model with this specification has also been thought not to be amenable to exact Bayesian analysis. All computational methods are applied to actual data on the exchange rates of the British pound, the French franc, and the German mark relative to the U.S. dollar.
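A minimal sketch of the data-generating side only: simulating a linear model with generalized error (exponential power) disturbances via the gamma representation. The Gibbs samplers described in the study are not reproduced, and the shape and scale values are illustrative assumptions.

```python
# Hedged sketch: simulate a linear model with generalized error (exponential
# power) disturbances; this only sets up data of the kind analyzed in the
# study, not the Bayesian samplers themselves.
import numpy as np

rng = np.random.default_rng(2)

def rgen_error(n, p, sigma):
    # density proportional to exp(-(|e|/sigma)^p): draw (|e|/sigma)^p ~ Gamma(1/p, 1)
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    return signs * sigma * g ** (1.0 / p)

n, beta = 500, np.array([1.0, -0.5])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta + rgen_error(n, p=1.5, sigma=0.8)      # heavier-tailed than normal

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]     # a first, non-Bayesian look
print("OLS estimate of beta:", beta_ols)
```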

7.
Prediction error is critical for assessing model fit and evaluating model prediction. We propose cross-validation (CV) and approximated CV methods for estimating prediction error under the Bregman divergence (BD), which embeds nearly all of the loss functions commonly used in the regression, classification, and machine learning literature. The approximated CV formulas are derived analytically, which enables fast estimation of prediction error under BD. We then study a data-driven optimal bandwidth selector for local-likelihood estimation that minimizes the overall prediction error or, equivalently, the covariance penalty. It is shown that the covariance penalty and CV methods converge to the same mean prediction error criterion. We also propose a lower-bound scheme for computing the local logistic regression estimates and demonstrate that the algorithm monotonically increases the target local likelihood and converges. The ideas and methods are extended to generalized varying-coefficient models and additive models.
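A minimal sketch, assuming the binomial deviance as one member of the Bregman divergence family: a plain K-fold CV estimate of prediction error for a logistic fit. The approximated-CV formulas and the bandwidth selector of the paper are not reproduced.

```python
# Hedged sketch: K-fold cross-validation estimate of prediction error under a
# Bregman divergence (here the binomial deviance) for logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(0.5 + X @ np.array([1.0, -1.5]))))
y = rng.binomial(1, p)

def binomial_deviance(y_true, p_hat, eps=1e-12):
    p_hat = np.clip(p_hat, eps, 1.0 - eps)
    return -2.0 * np.mean(y_true * np.log(p_hat) + (1 - y_true) * np.log(1 - p_hat))

cv_err = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    fit = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    cv_err.append(binomial_deviance(y[test], fit.predict_proba(X[test])[:, 1]))
print("5-fold CV prediction error (binomial deviance):", np.mean(cv_err))
```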

8.
Recently, a technique based on pseudo-observations has been proposed to tackle the so-called convex hull problem for the empirical likelihood statistic. The resulting adjusted empirical likelihood also achieves the higher-order precision of the Bartlett correction. Nevertheless, the technique induces an upper bound on the resulting statistic that may lead, in certain circumstances, to worthless confidence regions equal to the whole parameter space. In this paper, we show that suitable pseudo-observations can be deployed to make each element of the generalized power divergence family Bartlett-correctable and free of the convex hull problem. Our approach achieves this goal by means of two distinct sets of pseudo-observations with different roles. An important effect of our formulation is to provide a solution that overcomes the upper-bound problem. The proposal, whose effectiveness is confirmed by simulation results, restores the attractiveness of a broad class of statistics that potentially contains good alternatives to the empirical likelihood.
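A minimal sketch of the pseudo-observation device for a scalar mean, in the spirit of the adjusted empirical likelihood this paper builds on: one pseudo-observation drags the convex hull over zero. The paper's own two-set construction for the power-divergence family is not reproduced, and the adjustment constant is an assumption.

```python
# Hedged sketch: the convex hull problem for empirical likelihood (EL) and the
# pseudo-observation ("adjusted EL") fix, for the mean of a scalar sample.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
x = rng.normal(loc=1.0, scale=1.0, size=30)

def neg2_log_el(g):
    # -2 log EL ratio for moment values g_i = x_i - mu; a solution exists only
    # if 0 lies strictly inside the convex hull of the g_i
    if g.min() >= 0 or g.max() <= 0:
        raise ValueError("convex hull problem: 0 is outside the hull of the g_i")
    lo, hi = -1.0 / g.max() + 1e-10, -1.0 / g.min() - 1e-10
    lam = brentq(lambda l: np.sum(g / (1.0 + l * g)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * g))

def neg2_log_adjusted_el(g):
    # adjusted EL: append one pseudo-observation pulling the hull over zero
    a_n = max(1.0, np.log(len(g)) / 2.0)          # assumed adjustment level
    return neg2_log_el(np.append(g, -a_n * g.mean()))

mu0 = x.min() - 0.5                                # a value outside the data range
try:
    neg2_log_el(x - mu0)
except ValueError as err:
    print("plain EL:", err)
print("adjusted EL statistic at mu0:", neg2_log_adjusted_el(x - mu0))
```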

9.
In this article, we study two relevant information divergence measures, namely the Rényi divergence and Kerridge's inaccuracy measure. These measures are extended to conditionally specified models, and they are used to characterize some bivariate distributions using the concepts of weighted and proportional hazard rate models. Moreover, some bounds are obtained for these measures using the likelihood ratio order.
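A minimal sketch of the Rényi divergence of order α computed numerically and checked against the known closed form for two equal-variance normal densities; the conditional and weighted-model extensions in the article are not covered.

```python
# Hedged sketch: numerical Renyi divergence of order alpha between two
# densities, checked against the closed form for equal-variance normals,
# D_alpha = alpha * (mu1 - mu2)^2 / (2 * sigma^2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def renyi_divergence(p_pdf, q_pdf, alpha, lo=-20.0, hi=20.0):
    integrand = lambda x: p_pdf(x) ** alpha * q_pdf(x) ** (1.0 - alpha)
    return np.log(quad(integrand, lo, hi)[0]) / (alpha - 1.0)

mu1, mu2, sigma, alpha = 0.0, 1.0, 1.0, 0.7
num = renyi_divergence(norm(mu1, sigma).pdf, norm(mu2, sigma).pdf, alpha)
print("numerical:", num, " closed form:", alpha * (mu1 - mu2) ** 2 / (2 * sigma ** 2))
```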

10.
In this article we propose a modification of the recently introduced divergence information criterion (DIC; Mattheou et al., 2009) for determining the order of an autoregressive process, and show that it is an asymptotically unbiased estimator of the expected overall discrepancy, a nonnegative quantity that measures the distance between the true unknown model and a fitted approximating model. Further, we use Monte Carlo methods and various data generating processes for small, medium, and large sample sizes to explore the capabilities of the new criterion in selecting the optimal order of autoregressive processes and, more generally, in a time series context. The new criterion shows remarkably good results, choosing the correct model more frequently than traditional information criteria.
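A hedged sketch of the Monte Carlo order-selection setup only: the DIC formula is not reproduced here, so AIC and BIC from statsmodels stand in purely to illustrate the simulate-fit-select loop, and the AR(2) coefficients are illustrative assumptions.

```python
# Hedged sketch of a Monte Carlo order-selection experiment for AR processes.
# AIC and BIC are stand-ins; they are NOT the DIC studied in the article.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(5)
true_order, n_rep, n_obs, max_order = 2, 200, 150, 6

def simulate_ar2(n, phi1=0.6, phi2=-0.25, burn=100):
    e = rng.standard_normal(n + burn)
    x = np.zeros(n + burn)
    for t in range(2, n + burn):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + e[t]
    return x[burn:]

hits = {"aic": 0, "bic": 0}
for _ in range(n_rep):
    y = simulate_ar2(n_obs)
    fits = [AutoReg(y, lags=p).fit() for p in range(1, max_order + 1)]
    hits["aic"] += int(np.argmin([f.aic for f in fits]) + 1 == true_order)
    hits["bic"] += int(np.argmin([f.bic for f in fits]) + 1 == true_order)

print("proportion of correct order selections:",
      {k: v / n_rep for k, v in hits.items()})
```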

11.
An appealing, but invalid, derivation of the probability that at least one of n events occurs is justified, using a particular definition of subtraction of events. The probabilities that exactly m and at least m of the n events occur are derived similarly.
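A minimal sketch of the quantity in question: the inclusion-exclusion value of P(at least one of n events), verified against the direct union probability on a small, randomly generated finite probability space. The article's alternative derivation via subtraction of events is not reproduced.

```python
# Hedged sketch: inclusion-exclusion for P(at least one of n events), checked
# by computing the union probability directly on a finite sample space.
import itertools
import numpy as np

rng = np.random.default_rng(6)
n_outcomes, n_events = 12, 3
prob = rng.dirichlet(np.ones(n_outcomes))                 # a random probability space
events = [set(rng.choice(n_outcomes, size=5, replace=False)) for _ in range(n_events)]

def p(event):
    return prob[list(event)].sum()

# inclusion-exclusion: sum over non-empty index sets of (-1)^{|S|+1} P(intersection)
incl_excl = 0.0
for k in range(1, n_events + 1):
    for idx in itertools.combinations(range(n_events), k):
        incl_excl += (-1) ** (k + 1) * p(set.intersection(*(events[i] for i in idx)))

print("inclusion-exclusion:", incl_excl, " direct union probability:", p(set.union(*events)))
```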

12.
Upper bounds for the expected time to extinction in the Galton-Watson process are obtained. We also obtain upper and lower bounds for the extinction probability of this process. These bounds improve on bounds previously obtained by other authors.
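A minimal sketch of the underlying quantity: the extinction probability is the smallest fixed point of the offspring probability generating function on [0, 1], found here by fixed-point iteration for a Poisson offspring law; the article's explicit bounds are not reproduced.

```python
# Hedged sketch: extinction probability of a Galton-Watson process with
# Poisson offspring, via fixed-point iteration q <- f(q) starting from 0.
import numpy as np

def offspring_pgf(s, mean_offspring):
    return np.exp(mean_offspring * (s - 1.0))        # Poisson(mean) offspring PGF

def extinction_probability(mean_offspring, tol=1e-12, max_iter=10_000):
    q = 0.0
    for _ in range(max_iter):
        q_new = offspring_pgf(q, mean_offspring)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

for m in (0.8, 1.0, 1.5, 2.0):
    print(f"mean offspring {m}: extinction probability = {extinction_probability(m):.4f}")
```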

13.
陶然 《统计研究》2012,29(12):81-87
Starting from the process by which census data are generated, this article defines census coverage error as the net error between the actual census tabulation results and the true values of the target population. From an analysis of the role of non-sampling error, it puts forward three hypotheses about the influence of the sources of coverage error and argues that representing census coverage error as a net error is reasonable. On this basis, the mechanism generating coverage error is combined with a census data tabulation model to construct models of count and content coverage error, together with an error decomposition, for different types of censuses. These are used to discuss how non-sampling error affects coverage error and how count coverage error and content coverage error are related, laying a theoretical foundation for further research on the evaluation and control of census data quality.

14.
In this paper we study the polytomous logistic regression model and the asymptotic properties of the minimum ϕ-divergence estimators for this model. A simulation study is conducted to analyze the behavior of these estimators as a function of the power-divergence measure ϕ(λ). (Research partially done while the author was visiting Bowling Green State University as the Distinguished Lukacs Professor.)
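A minimal sketch of the model itself: fitting a polytomous (multinomial) logistic regression by maximum likelihood with scikit-learn. The minimum ϕ-divergence estimators studied in the paper are not implemented; the simulated coefficients and sample size are illustrative assumptions, and the fitted coefficients are identified only up to the usual reference constraint.

```python
# Hedged sketch: maximum likelihood fit of a polytomous (multinomial) logistic
# regression, the baseline against which minimum phi-divergence estimators are
# usually compared.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, n_classes = 600, 3
X = rng.normal(size=(n, 2))
B = np.array([[1.0, -1.0], [0.0, 1.5], [-1.0, 0.0]])        # one coefficient row per class
scores = X @ B.T
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
y = np.array([rng.choice(n_classes, p=pr) for pr in probs])

fit = LogisticRegression(max_iter=1000).fit(X, y)
print("estimated coefficients (one row per class):")
print(fit.coef_)
```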

15.
We compared the robustness of univariate and multivariate statistical procedures for controlling Type I error rates when the normality and homoscedasticity assumptions are not fulfilled. The procedures we evaluated are the mixed model fitted with the SAS Proc Mixed module, the Bootstrap-F approach, the Brown–Forsythe multivariate approach, the Welch–James multivariate approach, and the Welch–James multivariate approach with robust estimators. The results suggest that the Kenward–Roger, Brown–Forsythe, Welch–James, and Improved General Approximation procedures satisfactorily kept Type I error rates within the nominal levels for both the main and interaction effects under most of the conditions assessed.

16.
An increasing number of contemporary datasets are high dimensional. Applications require these datasets to be screened (or filtered) to select a subset for further study. Multiple testing is the standard tool in such applications, although alternatives have begun to be explored. In order to assess the quality of selection in these high-dimensional contexts, Cui and Wilson (2008b) proposed two viable methods of calculating the probability that any such selection is correct (PCS). PCS thereby serves as a measure of the quality of competing statistics used for selection. The first simulation study of this article investigates the two PCS statistics of that article; it shows that in the high-dimensional case PCS can be accurately estimated and is robust under certain conditions. The second simulation study investigates a nonparametric estimator of PCS.
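A minimal sketch of the PCS idea by brute-force Monte Carlo: select the k populations with the largest sample means and record how often the selection is exactly correct. The population configuration and the exact-match criterion are illustrative assumptions; Cui and Wilson's specific PCS estimators are not reproduced.

```python
# Hedged sketch: Monte Carlo estimate of the probability of correct selection
# (PCS) when the k populations with the largest sample means are selected.
import numpy as np

rng = np.random.default_rng(8)
n_pop, k, n_per_pop, n_rep = 500, 10, 5, 2000
mu = np.zeros(n_pop)
mu[:k] = 1.5                      # the first k populations truly have the largest means
truth = set(range(k))

correct = 0
for _ in range(n_rep):
    xbar = rng.normal(loc=mu, scale=1.0, size=(n_per_pop, n_pop)).mean(axis=0)
    selected = set(np.argsort(xbar)[-k:])
    correct += int(selected == truth)
print("Monte Carlo PCS estimate:", correct / n_rep)
```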

17.
The impact of ignoring the stratification effect on the probability of a Type I error is investigated. The evaluation is in a clinical setting where the treatments may have different response rates among the strata. Deviation from the nominal probability of a Type I error, α, depends on the stratification imbalance and the heterogeneity in the response rates; it appears that the latter has a larger impact. The probability of a Type I error is depicted for cases in which the heterogeneity in the response rate is present but there is no stratification imbalance. Three-dimensional graphs are used to demonstrate the simultaneous impact of heterogeneity in response rates and of stratification imbalance.
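A minimal sketch of the phenomenon: a simulated two-stratum trial in which the null hypothesis holds within each stratum, but response rates differ across strata and the treatment allocation is imbalanced; a pooled two-proportion z-test that ignores the strata then rejects far more often than the nominal level. All rates and sample sizes are illustrative assumptions.

```python
# Hedged sketch: empirical Type I error of a pooled two-proportion test that
# ignores stratification.  The null is true within every stratum.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
rates = np.array([0.2, 0.6])                     # heterogeneous response rates
n_treat = np.array([80, 20])                     # stratification imbalance:
n_ctrl = np.array([20, 80])                      # treatment over-represented in stratum 1
alpha, n_rep, rejections = 0.05, 5000, 0

for _ in range(n_rep):
    x_t = rng.binomial(n_treat, rates).sum()     # responses, pooled over strata
    x_c = rng.binomial(n_ctrl, rates).sum()
    nt, nc = n_treat.sum(), n_ctrl.sum()
    p_pool = (x_t + x_c) / (nt + nc)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / nt + 1 / nc))
    z = (x_t / nt - x_c / nc) / se
    rejections += int(abs(z) > norm.ppf(1 - alpha / 2))

print("empirical Type I error ignoring strata:", rejections / n_rep)
```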

18.
φ-divergence statistics are obtained either by replacing both distributions involved in the argument of the φ-divergence measure by their sample estimates, or by replacing one distribution and treating the other as given. The sampling properties of estimated divergence-type measures are investigated. Approximate means and variances are derived, and asymptotic distributions are obtained. Tests of goodness of fit of observed frequencies to expected ones, and tests of equality of divergences based on two or more multinomial samples, are constructed.
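A minimal sketch using the Cressie–Read power-divergence statistics, one concrete family of φ-divergence statistics, for testing observed multinomial frequencies against expected ones with scipy; the counts and hypothesized probabilities are illustrative.

```python
# Hedged sketch: power-divergence goodness-of-fit statistics for a multinomial
# sample; lambda_=1 recovers Pearson's chi-square and lambda_=0 the
# likelihood-ratio statistic G^2.
import numpy as np
from scipy.stats import power_divergence

observed = np.array([18, 55, 92, 61, 24])                # illustrative counts
model_probs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])        # hypothesized multinomial
expected = model_probs * observed.sum()

for lam, name in [(1.0, "Pearson X^2"), (0.0, "log-likelihood G^2"), (2 / 3, "Cressie-Read")]:
    stat, pval = power_divergence(observed, expected, lambda_=lam)
    print(f"{name:>18}: statistic = {stat:.3f}, p-value = {pval:.3f}")
```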

19.
Estimating the probabilities and numbers of marriages between only children and non-only children is one of the key techniques in simulating fertility policy. This article is the first to propose the same-age probability method and the multi-age probability method, and uses them to estimate, at the national level, the probabilities and numbers of marriages between two only children, between an only child and a non-only child, and between two non-only children. The principles and computational steps of the two methods are described in detail, and their results are analyzed and compared. The results show that both methods can compute various matching probabilities between only children and non-only children and can estimate the number of couples of each type. The same-age probability method is more intuitive and its data are easier to obtain, but it deviates somewhat from reality; the multi-age probability method is closer to reality and is less affected by abrupt changes in the number of potential spouses.

20.
In a recent article, Cardoso de Oliveira and Ferreira proposed a multivariate extension of the univariate chi-squared normality test, using a known result for the distribution of quadratic forms in normal variables. In this article, we propose a family of power divergence type test statistics for testing the hypothesis of multivariate normality. The proposed family of test statistics includes as a particular case the test proposed by Cardoso de Oliveira and Ferreira. We assess the performance of the new family of test statistics by Monte Carlo simulation. In this context, the Type I error rates and the power of the tests are studied for important members of the family. Moreover, the performance of notable members of the proposed family is compared with that of a multivariate normality test recently proposed by Batsidis and Zografos. Finally, two well-known data sets are used to illustrate the method developed in this article, as well as the specialized test of multivariate normality proposed by Batsidis and Zografos.
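A minimal sketch in the spirit of the chi-square-type construction the article generalizes: squared Mahalanobis distances of the sample are grouped into equiprobable chi-square(p) cells and a Cressie–Read power-divergence statistic is applied to the cell counts. This is an approximation for illustration (parameter estimation is ignored in the reference distribution), not the exact family of tests proposed in the article.

```python
# Hedged sketch: a power-divergence statistic applied to binned squared
# Mahalanobis distances as an informal check of multivariate normality.
import numpy as np
from scipy.stats import chi2, power_divergence

rng = np.random.default_rng(10)
n, p, n_cells = 300, 3, 8
X = rng.multivariate_normal(mean=np.zeros(p), cov=np.eye(p), size=n)

centered = X - X.mean(axis=0)
d2 = np.einsum("ij,jk,ik->i", centered, np.linalg.inv(np.cov(X, rowvar=False)), centered)

edges = chi2.ppf(np.linspace(0.0, 1.0, n_cells + 1), df=p)   # equiprobable cells
observed, _ = np.histogram(d2, bins=edges)
expected = np.full(n_cells, n / n_cells)

stat, pval = power_divergence(observed, expected, lambda_=2 / 3)
print("Cressie-Read statistic:", stat, " approximate p-value:", pval)
```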
