Similar Articles
20 similar articles found.
1.
According to the likelihood principle, if two designs produce proportional likelihood functions, one should make an identical inference about a parameter from the data, irrespective of the design that yielded the data. In fact, there are several counter-examples to, and paradoxical consequences of, the likelihood principle. Moreover, as we will see, contrary to a widely held opinion, the principle is not a direct consequence of Bayes' theorem. In particular, the information about the design is part of the evidence and is relevant for the prior. Later on, Jeffreys' non-informative prior is used to show how different designs result in different priors. Another basic idea of the present paper is that (apart from other information) the equiprobability assumption is to be linked to the impartiality of the design with respect to the parameter under consideration. The paper has notable implications for the foundations of statistics, ranging from the notion of sufficiency, the relevance of the stopping rule and of randomization in survey sampling and experimental design, and the difference between ignorable and non-ignorable designs, to a reconciliation of different approaches to inductive reasoning in statistical inference.
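To make the premise concrete (an illustration added here, not taken from the abstract itself), the textbook binomial versus negative-binomial comparison yields proportional likelihoods from two different designs: observing x = 3 successes in n = 12 Bernoulli trials with n fixed in advance, versus sampling until r = 3 successes are obtained.

```latex
\[
L_{\mathrm{bin}}(\theta) \;=\; \binom{12}{3}\,\theta^{3}(1-\theta)^{9},
\qquad
L_{\mathrm{nb}}(\theta) \;=\; \binom{11}{2}\,\theta^{3}(1-\theta)^{9}.
\]
% Both are proportional to theta^3 (1 - theta)^9, so the likelihood principle
% demands identical inference about theta, even though frequentist procedures
% tied to the two designs (stopping rules) can disagree.
```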

2.
3.
The weighted likelihood is a generalization of the likelihood designed to borrow strength from similar populations while making minimal assumptions. If the weights are properly chosen, the maximum weighted likelihood estimate may perform better than the maximum likelihood estimate (MLE). In a previous article, the minimum averaged mean squared error (MAMSE) weights were proposed, and simulations show that they allow the maximum weighted likelihood estimate to outperform the MLE in many cases. In this paper, we study the asymptotic properties of the MAMSE weights. In particular, we prove that the MAMSE-weighted mixture of empirical distribution functions converges uniformly to the target distribution and that the maximum weighted likelihood estimate is strongly consistent. A short simulation illustrates the use of the bootstrap in this context.
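As a minimal sketch of the underlying objects (my own illustration with fixed weights; the data-driven MAMSE weights themselves are not computed here), the weighted mixture of empirical CDFs and a maximum weighted likelihood estimate under a normal working model might look like:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def weighted_ecdf(samples, weights, x):
    """Weighted mixture of empirical CDFs: F_hat(x) = sum_k w_k * ECDF_k(x)."""
    return sum(w * np.mean(s <= x) for w, s in zip(weights, samples))

def neg_weighted_loglik(theta, samples, weights):
    """Negative weighted log-likelihood under a N(theta, 1) working model,
    each population entering through its average log-density."""
    return -sum(w * np.mean(stats.norm.logpdf(s, loc=theta))
                for w, s in zip(weights, samples))

rng = np.random.default_rng(0)
samples = [rng.normal(0.0, 1.0, 100), rng.normal(0.2, 1.0, 400)]  # similar populations
weights = [0.7, 0.3]                                              # fixed, sum to one
mwle = minimize_scalar(neg_weighted_loglik, args=(samples, weights),
                       bounds=(-5, 5), method="bounded").x
print(mwle, weighted_ecdf(samples, weights, 0.0))
```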

4.
We provide Monte Carlo evidence on the finite-sample behavior of the conditional empirical likelihood (CEL) estimator of Kitamura, Tripathi, and Ahn and the conditional Euclidean empirical likelihood (CEEL) estimator of Antoine, Bonnal, and Renault in the context of a heteroscedastic linear model with an endogenous regressor. We compare these estimators with three heteroscedasticity-consistent instrument-based estimators and the Donald, Imbens, and Newey estimator in terms of various performance measures. Our results suggest that CEL and CEEL with fixed bandwidths may suffer from the no-moment problem, similarly to the unconditional generalized empirical likelihood estimators studied by Guggenberger. We also study the CEL and CEEL estimators with automatic bandwidths selected through cross-validation, and find no evidence that these suffer from the no-moment problem. When the instruments are weak, CEL and CEEL have finite-sample properties, in terms of mean squared error and coverage probability of confidence intervals, poorer than the heteroscedasticity-consistent Fuller (HFUL) estimator. In the strong-instrument case, the CEL and CEEL estimators with automatic bandwidths tend to outperform HFUL in terms of mean squared error, while the reverse holds for coverage probability, although the differences in numerical performance are rather small.
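For concreteness (parameter values are my own illustrative choices, not those of the cited experiments), a data-generating process of this type, a heteroscedastic linear model with one endogenous regressor and two instruments, can be simulated as:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=(n, 2))              # instruments
u = rng.normal(size=n)                   # structural error
v = 0.5 * u + rng.normal(size=n)         # first-stage error correlated with u (endogeneity)
x = z @ np.array([1.0, 0.5]) + v         # endogenous regressor
sigma = np.exp(0.3 * z[:, 0])            # conditional heteroscedasticity via an instrument
y = 1.0 + 0.5 * x + sigma * u            # outcome; 0.5 is the parameter of interest
```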

5.
We define the maximum relevance-weighted likelihood estimator (MREWLE) using the relevance-weighted likelihood function introduced by Hu and Zidek (1995). Furthermore, we establish the consistency of the MREWLE under a wide range of conditions. Our results generalize those of Wald (1948) to both non-identically distributed random variables and unequally weighted likelihoods (when dealing with independent data sets of varying relevance to the inferential problem of interest). Asymptotic normality is also proven. The application of these results to generalized smoothing models is discussed.
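For reference (the standard general form of the relevance-weighted likelihood, not quoted verbatim from the paper), with independent observations x_1, ..., x_n and relevance weights lambda_i:

```latex
\[
\mathrm{REWL}(\theta) \;=\; \prod_{i=1}^{n} f(x_i;\theta)^{\lambda_i},
\qquad
\log \mathrm{REWL}(\theta) \;=\; \sum_{i=1}^{n} \lambda_i \log f(x_i;\theta).
\]
% The MREWLE is the maximizer of this weighted log-likelihood over theta;
% ordinary maximum likelihood is recovered when every lambda_i = 1.
```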

6.
In this paper I examine the finite-sample properties of the maximum likelihood and quasi-maximum likelihood estimators of EGARCH(1,1) processes using Monte Carlo methods. I use response surface methodology to summarize the results of a wide array of experiments, which suggest that the maximum likelihood estimator has reasonable finite-sample properties. The Gaussian quasi-maximum likelihood estimator has poor finite-sample properties when the data-generating process has conditional excess kurtosis; some of these poor properties appear to be asymptotic in nature.
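As a self-contained sketch (my own minimal implementation, not the paper's experimental code), one can simulate an EGARCH(1,1) process and fit it by Gaussian quasi-maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_egarch(n, omega=-0.1, beta=0.95, alpha=0.2, gamma=-0.1, seed=0):
    """Simulate EGARCH(1,1) with standard normal innovations:
    log(s2_t) = omega + beta*log(s2_{t-1}) + alpha*(|z_{t-1}| - sqrt(2/pi)) + gamma*z_{t-1}."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    logvar = np.empty(n)
    logvar[0] = omega / (1.0 - beta)                 # unconditional level
    for t in range(1, n):
        logvar[t] = (omega + beta * logvar[t - 1]
                     + alpha * (abs(z[t - 1]) - np.sqrt(2 / np.pi))
                     + gamma * z[t - 1])
    return np.exp(0.5 * logvar) * z

def qml_negloglik(params, y):
    """Negative Gaussian quasi-log-likelihood (up to an additive constant)."""
    omega, beta, alpha, gamma = params
    logvar = np.empty(len(y))
    logvar[0] = np.log(np.var(y))
    for t in range(1, len(y)):
        z_prev = y[t - 1] * np.exp(-0.5 * logvar[t - 1])
        logvar[t] = np.clip(omega + beta * logvar[t - 1]
                            + alpha * (abs(z_prev) - np.sqrt(2 / np.pi))
                            + gamma * z_prev, -30, 30)   # numerical safeguard
    return 0.5 * np.sum(logvar + y**2 * np.exp(-logvar))

y = simulate_egarch(2000)
fit = minimize(qml_negloglik, x0=[-0.1, 0.9, 0.1, 0.0], args=(y,), method="Nelder-Mead")
print(fit.x)   # QML estimates of (omega, beta, alpha, gamma)
```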

7.

8.
Modern theory for statistical hypothesis testing can broadly be classified as Bayesian or frequentist. Unfortunately, one can reach divergent conclusions if Bayesian and frequentist approaches are applied in parallel to analyze the same data set. This is a serious impasse, since there is no consensus on when to use one approach to the detriment of the other. However, this conflict can be resolved: the present paper shows the existence of a perfect equivalence between Bayesian and frequentist methods for testing. Hence, Bayesian and frequentist decision rules can always be calibrated, in both directions, so as to yield concordant results.

9.
The relevance-weighted likelihood method was introduced by Hu and Zidek (Technical Report No. 161, Department of Statistics, The University of British Columbia, Vancouver, 1995) to formally embrace a variety of statistical procedures for trading bias for precision. Their approach combines all relevant information through a weighted version of the likelihood function. The present paper is concerned with the asymptotic properties of a class of maximum weighted likelihood estimators that contains those considered by Hu and Zidek (1995; and 2001, in: Ahmed, S.E., Reid, N. (Eds.), Empirical Bayes and Likelihood Inference, Springer, New York, p. 211). Our results complement those of Hu (Can. J. Stat. 25 (1997) 45); in particular, we invoke a different asymptotic paradigm. Moreover, our adaptive weights are allowed to depend on the data.

10.
We examine the finite-sample properties of the maximum likelihood estimator for the binary logit model with random covariates. Previous studies have either relied on large-sample asymptotics or have assumed non-random covariates. We derive analytic expressions for the first-order bias and second-order mean squared error function of the maximum likelihood estimator in this model, and undertake numerical evaluations to illustrate these analytic results for the single-covariate case. For various data distributions, the bias of the estimator has the same sign as the covariate's coefficient, and both the absolute bias and the mean squared error increase symmetrically with the absolute value of that parameter. The behaviour of a bias-adjusted maximum likelihood estimator, constructed by subtracting the (maximum likelihood) estimate of the first-order bias from the original estimator, is examined in a Monte Carlo experiment. This bias correction is effective in all of the cases considered and is recommended when this logit model is estimated by maximum likelihood using small samples.
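A small sketch of this kind of bias adjustment (my own illustration; it uses a parametric-bootstrap bias estimate as a stand-in for the paper's analytic first-order bias expression):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def negloglik(beta, x, y):
    """Negative log-likelihood of a single-covariate (no-intercept) logit model."""
    eta = beta * x
    return np.sum(np.log1p(np.exp(eta)) - y * eta)

def logit_mle(x, y):
    return minimize_scalar(negloglik, args=(x, y), bounds=(-10, 10), method="bounded").x

rng = np.random.default_rng(1)
n, beta_true = 40, 1.5                               # deliberately small sample
x = rng.standard_normal(n)
p = lambda b: 1.0 / (1.0 + np.exp(-b * x))
y = (rng.random(n) < p(beta_true)).astype(float)

beta_hat = logit_mle(x, y)
# Parametric-bootstrap estimate of the first-order bias (stand-in for the
# analytic expression derived in the paper):
boot = [logit_mle(x, (rng.random(n) < p(beta_hat)).astype(float)) for _ in range(500)]
beta_bc = beta_hat - (np.mean(boot) - beta_hat)      # bias-adjusted estimate
print(beta_hat, beta_bc)
```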

11.
The method of minimum likelihood allocation (MLA) for allocating subjects to treatments in a clinical trial amounts to checking, at each stage, which allocation would lead an outside observer to find the least evidence of a relationship between treatment and factors of prognostic significance, assuming that the observer uses a linear exponential model. One advantage of MLA is that results from game theory and likelihood theory can be used to prove that it has desirable long-run properties. Two such properties demonstrated here are (1) 'consistency', in the sense that the average likelihood ratio which measures design imbalance tends to zero, and (2) 'efficiency', in the sense that the variance estimates of treatment effects tend to be minimized in the long run.

12.
Prediction limits for the Poisson distribution are useful in practice when predicting the occurrences of some phenomenon, for example, the number of infections from a disease per year among school children, or the number of hospitalizations per year among patients with cardiovascular disease. In order to allocate the right resources and to estimate the associated cost, one would want to know the worst (i.e., an upper limit) and the best (i.e., a lower limit) scenarios. Under the Poisson distribution, we construct the optimal frequentist and Bayesian prediction limits, and assess frequentist properties of the Bayesian prediction limits. We show that the Bayesian upper prediction limit derived from a uniform prior distribution and the Bayesian lower prediction limit derived from a modified Jeffreys non-informative prior coincide with their respective frequentist limits. This is not the case for the Bayesian lower prediction limit derived from a uniform prior or the Bayesian upper prediction limit derived from a modified Jeffreys prior distribution. Furthermore, it is shown that not all Bayesian prediction limits derived from a proper prior can be interpreted in a frequentist context. We state a sufficient condition and show, using a counterexample, that Bayesian prediction limits derived from proper priors satisfying our condition cannot be interpreted in a frequentist context. Analyses of simulated data and of data on Atlantic tropical storm occurrences are presented.
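A small sketch of the uniform-prior case (my own simplified setting with a single observed count and equal exposure for the future count; not the paper's code):

```python
import numpy as np
from scipy import stats

def bayes_upper_pl(x, alpha=0.05):
    """Upper 1-alpha Bayesian prediction limit for a future Poisson count Y,
    given an observed count x and a uniform (flat) prior on the mean: the
    posterior is Gamma(x+1, 1), so the predictive is NB(r=x+1, p=1/2)."""
    return stats.nbinom.ppf(1 - alpha, x + 1, 0.5)

# Monte Carlo check of frequentist coverage at a fixed mean
rng = np.random.default_rng(2)
theta, B = 4.0, 20000
xs, ys = rng.poisson(theta, B), rng.poisson(theta, B)
print(np.mean(ys <= bayes_upper_pl(xs)))   # close to (or above) 0.95
```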

13.
In Hong Chang, Statistics, 2015, 49(5):1095-1103
With a view to predicting a scalar-valued future observation on the basis of past observations, we explore predictive sets having frequentist as well as Bayesian validity for arbitrary priors in a higher-order asymptotic sense. It is found that a connection with locally unbiased tests is useful for this purpose. Illustrative examples are given. Computation and simulation studies lend support to our asymptotic results in finite samples. The issue of expected lengths of our predictive sets is also discussed.

14.
15.
It is well known that the construction of two-sided tolerance intervals is far more challenging than that of their one-sided counterparts. In a general framework of parametric models, we derive asymptotic results leading to explicit formulae for two-sided Bayesian and frequentist tolerance intervals. In the process, probability matching priors for such intervals are characterized, and their role in finding frequentist tolerance intervals via a Bayesian route is indicated. Furthermore, in situations where matching priors are hard to obtain, we develop purely frequentist tolerance intervals as well. The findings are applied to real data. Simulation studies lend support to the asymptotic results in finite samples.
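As a purely frequentist point of comparison (Howe's classical k-factor approximation for the normal case, added here as an illustration rather than the paper's matching-prior construction):

```python
import numpy as np
from scipy import stats

def normal_tolerance_interval(x, coverage=0.90, conf=0.95):
    """Two-sided normal tolerance interval via Howe's k-factor approximation:
    with confidence `conf`, the interval covers at least `coverage` of the
    population. Uses nu = n - 1 degrees of freedom."""
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2_low = stats.chi2.ppf(1 - conf, nu)          # lower (1 - conf) chi-square quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2_low)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

x = np.random.default_rng(4).normal(10.0, 2.0, size=30)
print(normal_tolerance_interval(x))
```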

16.
In two-sample semiparametric survival models other than the Cox proportional-hazards regression model, it is shown that partial-likelihood inference of structural parameters in the presence of fully nonparametric nuisance hazards typically has relative efficiency zero compared with full-likelihood inference. The practical interpretation of efficiencies in the presence of infinite-dimensional nuisance parameters is discussed, with reference to two important examples, namely a recent survival regression model of Clayton and Cuzick and a class of additive excess-risk models. Under the excess-risk models, a formula is derived for the large-sample information (which here is the same as the limiting Fisher information when the nuisance-parameter dimension gets large) for estimating the parameter of difference between two samples, as the nuisance function becomes fully nonparametric.

17.
Robinson (1982a) presented a general approach to serial correlation in limited dependent variable models and proved the strong consistency and asymptotic normality of the quasi-maximum likelihood estimator (QMLE) for the Tobit model with serial correlation, where the QMLE is obtained under the assumption of independent errors. This paper proves the strong consistency and asymptotic normality of the QMLE based on independent errors for the truncated regression model with serial correlation, and gives consistent estimators for the limiting covariance matrix of the QMLE.
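For reference (the standard zero-truncated normal regression form, not reproduced from the paper), the log-likelihood that the independence-based QMLE maximizes is:

```latex
\[
\ell(\beta,\sigma)
\;=\; \sum_{i=1}^{n}\left[\log\phi\!\left(\frac{y_i - x_i'\beta}{\sigma}\right)
- \log\sigma - \log\Phi\!\left(\frac{x_i'\beta}{\sigma}\right)\right],
\qquad y_i > 0,
\]
% where phi and Phi are the standard normal pdf and cdf; the final term corrects
% for truncation at zero, and any serial correlation in the errors is ignored.
```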

18.

19.
Rubin (1976, Biometrika 63(3):581-592) derived general conditions under which inferences that ignore missing data are valid. These conditions are sufficient but not generally necessary, and may therefore be relaxed in some special cases. We consider here the case of frequentist estimation of a conditional cdf subject to missing outcomes. We partition a set of data into outcome, conditioning, and latent variables, all of which potentially affect the probability of a missing response. We describe sufficient conditions under which a complete-case estimate of the conditional cdf of the outcome given the conditioning variable is unbiased. We use simulations on a renal transplant data set (Dienemann et al.) to illustrate the implications of these results.

20.
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with the plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihood with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood-free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation methods in terms of the dimensionality of the parameter and summary statistic.
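A minimal sketch of the plug-in synthetic likelihood itself (my own toy illustration; the MCMC and variational machinery developed in the article is omitted):

```python
import numpy as np
from scipy import stats

def synthetic_loglik(theta, s_obs, simulate, summarize, m=200, seed=0):
    """Plug-in synthetic log-likelihood at theta: simulate m data sets, fit a
    Gaussian to their summary statistics, evaluate the observed summary."""
    rng = np.random.default_rng(seed)
    S = np.array([summarize(simulate(theta, rng)) for _ in range(m)])
    mu, Sigma = S.mean(axis=0), np.cov(S, rowvar=False)
    return stats.multivariate_normal.logpdf(s_obs, mean=mu, cov=Sigma)

# Toy usage: infer a normal mean from (mean, sd) summaries
simulate = lambda theta, rng: rng.normal(theta, 1.0, size=50)
summarize = lambda d: np.array([d.mean(), d.std()])
s_obs = summarize(np.random.default_rng(3).normal(0.7, 1.0, size=50))
grid = np.linspace(-1.0, 2.0, 31)
sl = [synthetic_loglik(t, s_obs, simulate, summarize) for t in grid]
print(grid[int(np.argmax(sl))])        # rough point estimate near 0.7
```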
