Similar Articles
20 similar articles were retrieved.
1.
In the problem of parametric statistical inference with a finite parameter space, we propose some simple rules for defining posterior upper and lower probabilities directly from the observed likelihood function, without using any prior information. The rules satisfy the likelihood principle and a basic consistency principle ('avoiding sure loss'), they produce vacuous inferences when the likelihood function is constant, and they have other symmetry, monotonicity and continuity properties. One of the rules also satisfies fundamental frequentist principles. The rules can be used to eliminate nuisance parameters, to interpret the likelihood function, and to use it in making decisions. To compare the rules, they are applied to the problem of sampling from a finite population. Our results indicate that there are objective statistical methods which can reconcile three general approaches to statistical inference: likelihood inference, coherent inference and frequentist inference.

2.
While applying the classical maximum likelihood method for a certain statistical inference problem, Smith and Weissman [5] have noted that there are conditions under which the likelihood function may be unbounded above or may not possess local maximizers. Ariyawansa and Templeton [1] have derived inference procedures for this problem using the theory of structural inference [2,3,4]. Based on numerical experience, and without proof, they state that the resulting likelihood functions possess unique, global maximizers, even in instances where the classical maximum likelihood method fails in the above sense. In this paper, we prove that under quite mild conditions, these likelihood functions that result from the application of the theory of structural inference are well-behaved, and possess unique, global maximizers. This research was supported in part by the Applied Mathematical Sciences subprogram of the U.S. Department of Energy under contract W-31-109-Eng-38 while the author was visiting the Mathematics and Computer Science Division of Argonne National Laboratory, Argonne, Illinois.

3.
Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method derives an approximate likelihood function from a plug-in normal density estimate for the summary statistic, with plug-in mean and covariance matrix obtained by Monte Carlo simulation from the model. In this article, we develop alternatives to Markov chain Monte Carlo implementations of Bayesian synthetic likelihoods with reduced computational overheads. Our approach uses stochastic gradient variational inference methods for posterior approximation in the synthetic likelihood context, employing unbiased estimates of the log likelihood. We compare the new method with a related likelihood-free variational inference technique in the literature, while at the same time improving the implementation of that approach in a number of ways. These new algorithms are feasible to implement in situations which are challenging for conventional approximate Bayesian computation methods, in terms of the dimensionality of the parameter and summary statistic.
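As background, the core synthetic-likelihood computation described in this abstract (a plug-in Gaussian density for the summary statistic, with mean and covariance estimated by Monte Carlo simulation from the model) can be sketched as follows. This is a minimal illustration, not the authors' variational algorithm; simulate_summaries is a hypothetical user-supplied model simulator, not a function from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, s_obs, simulate_summaries, n_sim=500, rng=None):
    """Gaussian synthetic log-likelihood of theta given observed summaries s_obs.

    simulate_summaries(theta, rng) is a hypothetical user-supplied function that
    simulates one dataset from the model at theta and returns its summary-statistic
    vector; it is an assumption of this sketch, not part of the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Monte Carlo plug-in estimates of the summary-statistic mean and covariance
    sims = np.array([simulate_summaries(theta, rng) for _ in range(n_sim)])
    mu_hat = sims.mean(axis=0)
    sigma_hat = np.cov(sims, rowvar=False)
    # Plug-in normal density evaluated at the observed summary statistic
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=sigma_hat)
```

The article's contribution is to embed unbiased estimates of this quantity in a stochastic gradient variational approximation to the posterior rather than in an MCMC sampler; that machinery is not shown here.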

4.
We consider semiparametric profile likelihood inference for the distribution function under doubly censored data. To support further development of statistical inference based on the profile likelihood ratio, as well as of alternative tools such as score- or Wald-type inference, we discuss the structure of the profile likelihood estimators and of their derivatives that appear in the score function and the Fisher information of the profile likelihood, and we establish the consistency of these estimators.

5.
The aim of this paper is to investigate the robustness properties of likelihood inference with respect to rounding effects. Attention is focused on exponential families and on inference about a scalar parameter of interest, also in the presence of nuisance parameters. A summary value of the influence function of a given statistic, the local-shift sensitivity, is considered. It accounts for small fluctuations in the observations. The main result is that the local-shift sensitivity is bounded for the usual likelihood-based statistics, i.e. the directed likelihood, the Wald and score statistics. It is also bounded for the modified directed likelihood, which is a higher-order adjustment of the directed likelihood. The practical implication is that likelihood inference is expected to be robust with respect to rounding effects. Theoretical analysis is supplemented and confirmed by a number of Monte Carlo studies, performed to assess the coverage probabilities of confidence intervals based on likelihood procedures when data are rounded. In addition, simulations indicate that the directed likelihood is less sensitive to rounding effects than the Wald and score statistics. This provides another criterion for choosing among first-order equivalent likelihood procedures. The modified directed likelihood shows the same robustness as the directed likelihood, so that its gain in inferential accuracy does not come at the price of an increase in instability with respect to rounding.
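For reference, the first-order statistics mentioned in this abstract have the following standard forms for a scalar interest parameter, in one common standardization (textbook definitions, not taken from the paper itself):

```latex
\begin{aligned}
r(\psi)     &= \operatorname{sign}(\hat\psi - \psi)\,\sqrt{2\{\ell_p(\hat\psi) - \ell_p(\psi)\}} && \text{(directed likelihood)}\\
w(\psi)     &= (\hat\psi - \psi)\, j_p(\hat\psi)^{1/2}                                           && \text{(Wald statistic)}\\
s(\psi)     &= \ell_p'(\psi)\, j_p(\hat\psi)^{-1/2}                                              && \text{(score statistic)}\\
r^{*}(\psi) &= r(\psi) + r(\psi)^{-1}\log\{q(\psi)/r(\psi)\}                                     && \text{(modified directed likelihood)}
\end{aligned}
```

where \ell_p is the profile log-likelihood, \hat\psi its maximizer, j_p the observed profile information, and q(\psi) a higher-order correction term whose exact form depends on the adjustment used (e.g. Barndorff-Nielsen's); all four statistics are asymptotically standard normal to first order.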

6.
Likelihood     
This article reviews some recent work on large-sample likelihood inference, in particular work related to conditional inference and ancillary statistics. The background discussion includes a relation between conditional inference and testing for model fit. There are brief comments on results for sequential estimation, also on the difficulties associated with multiparameter problems.

7.
We propose a weighted empirical likelihood approach to inference with multiple samples, including stratified sampling, the estimation of a common mean using several independent and non-homogeneous samples and inference on a particular population using other related samples. The weighting scheme and the basic result are motivated and established under stratified sampling. We show that the proposed method can ideally be applied to the common mean problem and problems with related samples. The proposed weighted approach not only provides a unified framework for inference with multiple samples, including two-sample problems, but also facilitates asymptotic derivations and computational methods. A bootstrap procedure is also proposed in conjunction with the weighted approach to provide better coverage probabilities for the weighted empirical likelihood ratio confidence intervals. Simulation studies show that the weighted empirical likelihood confidence intervals perform better than existing ones.
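For readers less familiar with empirical likelihood, the single-sample construction that the weighted approach generalizes is the following (standard background due to Owen, not the paper's weighted version): for observations X_1,...,X_n and a candidate mean value,

```latex
R(\mu) = \max\Bigl\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i (X_i - \mu) = 0 \Bigr\},
```

and -2 log R(\mu) converges in distribution to a chi-squared variable with one degree of freedom under mild moment conditions, which is what underlies empirical likelihood ratio confidence intervals. The weighted multi-sample version and the accompanying bootstrap calibration are given in the article.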

8.
The structural approach to inference for the parameters of a simultaneous equation model with heteroscedastic error variance is investigated in this paper. The joint and marginal structural distributions for the coefficients of the exogenous variables and the scale parameters of the error variables, and the marginal likelihood function of the coefficients of the endogenous variables, are derived. The estimates are directly obtainable from the structural distribution and the marginal likelihood function of the parameters. The marginal distribution of a subset of coefficients of exogenous variables provides the basis for making inference about a particular subset of parameters of interest.

9.
The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and for constructing confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented.

10.
We consider inference for queues based on inter-departure time data. Calculating the likelihood for such models is difficult, as it involves summing over the (exponentially large) space of realisations of the arrival process. We demonstrate how a likelihood recursion can be used to calculate this likelihood efficiently for the specific cases of the M/G/1 and Er/G/1 queues. We compare the sampling properties of the maximum likelihood estimators with those of estimators based on indirect inference that have previously been suggested for this problem.

11.
The multivariate normal distribution, with its well-established theory, is commonly used to analyze correlated data of various types. However, the resulting inference is, more often than not, invalid if the model assumption fails. We present a modification that adapts the multivariate normal likelihood to general correlated data. The modified likelihood is asymptotically valid for any true underlying joint distribution with finite second moments. One can therefore obtain full likelihood inference without knowing the true random mechanism underlying the data. Simulations and a real data analysis are provided to demonstrate the merits of the proposed parametric robust method.
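For context, a standard device behind this kind of parametric robustness is to treat the multivariate normal as a working likelihood and to base inference on a sandwich covariance estimator; the display below is that generic device under standard regularity conditions, not the authors' specific modification, whose point is precisely to attain validity under finite second moments alone.

```latex
\sqrt{n}\,(\hat\theta - \theta_0) \xrightarrow{d}
N\!\bigl(0,\ A(\theta_0)^{-1} B(\theta_0) A(\theta_0)^{-1}\bigr),
\qquad
A(\theta) = -\mathrm{E}\!\left[\frac{\partial^{2} \ell_i(\theta)}{\partial\theta\,\partial\theta^{\top}}\right],
\quad
B(\theta) = \mathrm{E}\!\left[\frac{\partial \ell_i(\theta)}{\partial\theta}\,\frac{\partial \ell_i(\theta)}{\partial\theta^{\top}}\right],
```

where \ell_i is the Gaussian working log-likelihood contribution of the i-th observation and \hat\theta maximizes the working likelihood.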

12.
One important type of question in statistical inference is how to interpret data as evidence. The law of likelihood provides a satisfactory answer when interpreting data as evidence for simple hypotheses, but remains silent for composite hypotheses. This article examines how the law of likelihood can be extended to composite hypotheses within the scope of the likelihood principle. From a system of axioms, we conclude that the strength of evidence for composite hypotheses should be represented by an interval between lower and upper profile likelihoods. This article is intended to reveal the connection between profile likelihoods and the law of likelihood under the likelihood principle rather than to argue in favor of the use of profile likelihoods in addressing general questions of statistical inference. The interpretation of the result is also discussed.
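One natural formalization of the interval described above, offered here purely as an illustration rather than as the authors' exact construction, attaches to a composite hypothesis H (a subset of the parameter space) its lower and upper profile likelihoods:

```latex
\underline{L}(H) = \inf_{\theta \in \Theta_H} L(\theta),
\qquad
\overline{L}(H) = \sup_{\theta \in \Theta_H} L(\theta),
```

so that the evidence for H_1 over H_2 is summarized not by a single likelihood ratio but by the interval of ratios from \underline{L}(H_1)/\overline{L}(H_2) to \overline{L}(H_1)/\underline{L}(H_2).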

13.
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that are affected by independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented from standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.

14.
Estimation and inference for dependent trials are important issues in response-adaptive allocation designs; maximum likelihood estimation is one route of interest. We present three novel response-driven designs and derive their maximum likelihood estimators. We also provide convenient regularity conditions that ensure the maximum likelihood estimator from a multiparameter stochastic process exists and is asymptotically multivariate normal. While these conditions may not be the most general, they are easily verified for our applications.

15.
This paper surveys asymptotic theory of maximum likelihood estimation for not identically distributed, possibly dependent observations. Main results on consistency, asymptotic normality and efficiency are stated within a unified framework. Limiting distributions of the likelihood ratio, Wald and score statistics for composite hypotheses are obtained under the same conditions by a generalization of existing theory. Modifications for maximum likelihood estimation under misspecification, containing the results for correctly specified models, are presented, and extensions to likelihood inference in the presence of nuisance parameters are indicated.

16.
The generalized likelihood plays an important role in parametric inference for prediction and in empirical Bayesian models. This paper emphasizes the utility of the generalized likelihood as a summarization procedure in general prediction models. Properties of the generalized likelihood in this setting are discussed, and its use as a data-analytic tool is illustrated in a series of numerical examples.

17.
In this paper, we introduce the empirical likelihood (EL) method to longitudinal studies. By considering the dependence within subjects in the auxiliary random vectors, we propose a new weighted empirical likelihood (WEL) inference for generalized linear models with longitudinal data. We show that the weighted empirical likelihood ratio always follows an asymptotically standard chi-squared distribution regardless of which working weight matrix is chosen, although a well-chosen working weight matrix can improve the efficiency of statistical inference. Simulations are conducted to demonstrate the accuracy and efficiency of the proposed WEL method, and a real data set is used to illustrate the proposed method.

18.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic: it is even possible to encounter maximum likelihood estimates that are not finite, that is, monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation. A penalization scheme based on the Jeffreys’ invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing are presented. Two empirical applications are presented and discussed.
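For context, the Jeffreys-prior penalization referred to here takes, in its standard Firth-type form, the penalized log-likelihood

```latex
\ell^{*}(\theta) = \ell(\theta) + \tfrac{1}{2}\log\bigl|I(\theta)\bigr|,
```

where \ell is the log-likelihood and I(\theta) the expected Fisher information; the penalty pulls the maximizer away from the boundary of the parameter space and typically resolves the monotone-likelihood problem described above. The exact penalization schemes compared for the MEW distribution are detailed in the paper.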

19.
We consider the combination of path sampling and perfect simulation in the context of both likelihood inference and non-parametric Bayesian inference for pairwise interaction point processes. Several empirical results based on simulations and analysis of a data set are presented, and the merits of using perfect simulation are discussed.

20.
Shibin Zhang & Xuming He, Statistics, 2016, 50(3): 667–688
Probability transform-based inference, for example, characteristic function-based inference, is a good alternative to likelihood methods when the probability density function is unavailable or intractable. However, a set of grids needs to be determined to provide an effective estimator based on probability transforms. This paper is concerned with parametric inference based on adaptive selection of grids. By employing a closeness measure to evaluate the asymptotic variance of the transform-based estimator, we propose a statistical inference procedure, accompanied by adaptive grid selection. The selection algorithm aims for a small set of grids, and yet the resulting estimator can be highly efficient. Generally, the asymptotic variance is very close to that of the maximum likelihood estimator.
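To make the idea concrete, a bare-bones characteristic-function-based estimator on a fixed grid looks like the sketch below. This is a minimal illustration only: the grid is simply taken as given, whereas the paper's contribution is the adaptive, variance-driven selection of the grid points; cf_distance, model_cf and normal_cf are illustrative names, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cf_distance(theta, data, model_cf, grid):
    """Squared distance between the empirical characteristic function of the data
    and the model characteristic function, evaluated on a fixed set of grid points."""
    ecf = np.array([np.mean(np.exp(1j * t * data)) for t in grid])
    mcf = np.array([model_cf(t, theta) for t in grid])
    return float(np.sum(np.abs(ecf - mcf) ** 2))

# Illustrative model: normal(mu, sigma), with sigma log-parametrized so the
# optimizer can work on an unconstrained scale.
def normal_cf(t, theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=500)
grid = np.linspace(0.1, 1.0, 10)  # crude fixed grid; the paper selects grid points adaptively
fit = minimize(cf_distance, x0=np.array([0.0, 0.0]),
               args=(data, normal_cf, grid), method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))  # estimates of mu and sigma, roughly 1.0 and 2.0
```

With well-chosen grid points, the asymptotic variance of such estimators can come very close to that of the maximum likelihood estimator, which is the regime the article targets.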
