Similar Documents
20 similar documents found (search time: 31 ms)
1.
Bayesian alternatives to classical tests are considered for several testing problems. One-sided and two-sided sets of hypotheses are tested concerning an exponential parameter, a binomial proportion, and a normal mean. Hierarchical Bayes and noninformative Bayes procedures are compared with the appropriate classical procedure, either the uniformly most powerful test or the likelihood ratio test, in the different situations. The hierarchical prior employed is the conjugate prior at the first stage, with its mean being the test parameter, and a noninformative prior at the second stage for the hyperparameter(s) of the first-stage prior. Fair comparisons are attempted, where fair means the probability of making a type I error is approximately the same for the different testing procedures; once this condition is satisfied, the powers of the different tests are compared, with larger power indicating a better test. This comparison is difficult in the two-sided case due to the unsurprising discrepancy between Bayesian and classical measures of evidence that has been discussed for years. The hierarchical Bayes tests appear to compete well with the typical classical test in the one-sided cases.
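The "fair comparison" idea can be sketched in code. The following is a minimal illustration (not the paper's hierarchical prior): for H0: mu <= 0 vs H1: mu > 0 with N(mu, 1) data, both the classical z statistic and a conjugate-Bayes posterior probability P(mu > 0 | data) are calibrated to the same empirical type I error before their powers are compared. The prior N(0, tau2) and all sample sizes here are our own assumptions for the sketch.

```python
import random
from statistics import NormalDist

nd = NormalDist()
random.seed(1)
n, tau2, reps = 25, 1.0, 20000

def xbar(mu):
    # Sample mean of n observations from N(mu, 1).
    return sum(random.gauss(mu, 1.0) for _ in range(n)) / n

def post_prob(x):
    # P(mu > 0 | data) under a N(0, tau2) prior with known sigma = 1.
    post_mean = n * x * tau2 / (1.0 + n * tau2)
    post_sd = (tau2 / (1.0 + n * tau2)) ** 0.5
    return nd.cdf(post_mean / post_sd)

# Calibrate both tests to (approximately) 5% type I error on the same null draws.
null_xbars = [xbar(0.0) for _ in range(reps)]
z_cut = sorted(n ** 0.5 * x for x in null_xbars)[int(0.95 * reps)]
b_cut = sorted(post_prob(x) for x in null_xbars)[int(0.95 * reps)]

# Compare power at mu = 0.3 once the sizes agree.
alt_xbars = [xbar(0.3) for _ in range(reps)]
power_z = sum(n ** 0.5 * x > z_cut for x in alt_xbars) / reps
power_b = sum(post_prob(x) > b_cut for x in alt_xbars) / reps
```

Because the posterior probability is a monotone function of the sample mean here, the size-calibrated tests reject on essentially the same region, which is the one-sided phenomenon the abstract describes.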

2.
Statistical inference for exponential inhomogeneous Markov point processes by transformation is discussed. It is argued that the inhomogeneity parameter can be estimated using a partial likelihood based on an inhomogeneous Poisson point process. The inhomogeneity parameter can thereby be estimated without taking the interaction into account, which simplifies the statistical analysis considerably. Data analysis and simulation experiments support the results.

3.
In this paper, we consider the distribution of life length of a series system with random number of components, say Z. Considering the distribution of Z as generalized Poisson, an exponential-generalized Poisson (EGP) distribution is developed. The generalized Poisson distribution is a generalization of the Poisson distribution having one extra parameter. The structural properties of the resulting distribution are presented and the maximum likelihood estimation of the parameters is investigated. Extensive simulation studies are carried out to study the performance of the estimates. The score test is developed to test the importance of the extra parameter. For illustration, two real data sets are examined and it is shown that the EGP model, presented here, fits better than the exponential–Poisson distribution.
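The compounding construction can be sketched by simulation: draw the component count Z from a zero-truncated generalized Poisson (Consul) distribution and take the minimum of Z i.i.d. exponential component lifetimes. The parameter names and the zero-truncation are our own assumptions for this illustration.

```python
import math
import random

def gp_pmf(z, theta, lam):
    # Consul's generalized Poisson pmf, valid for theta > 0, 0 <= lam < 1.
    return theta * (theta + z * lam) ** (z - 1) * math.exp(-theta - z * lam) / math.factorial(z)

def sample_trunc_gp(theta, lam, rng):
    # Inverse-cdf sampling of Z >= 1 (zero-truncated: a nonempty system).
    u = rng.random() * (1.0 - gp_pmf(0, theta, lam))
    z, cum = 1, 0.0
    while True:
        cum += gp_pmf(z, theta, lam)
        if u <= cum or z > 150:   # safety cap against float round-off
            return z
        z += 1

def sample_egp_lifetime(theta, lam, rate, rng):
    # Series-system lifetime: minimum of Z exponential component lifetimes.
    z = sample_trunc_gp(theta, lam, rng)
    return min(rng.expovariate(rate) for _ in range(z))
```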

4.
We consider the problem of detecting a ‘bump’ in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio-based statistics, which allow for exact finite sample inference and asymptotically optimal detection: the maximum of the penalized square root of log likelihood ratios (‘penalized scan’) evaluated over a certain sparse set of intervals, and a certain average of log likelihood ratios (‘condensed average likelihood ratio’). We show that penalizing the square root of the log likelihood ratio, rather than the log likelihood ratio itself, leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow for optimal detection, but sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This results in a considerable simplification in the theoretical treatment compared with the usual approach for this type of penalization technique, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of using the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.
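The log likelihood ratio being scanned can be shown with a brute-force sketch over binned counts. This is the plain unpenalized scan over all intervals, not the paper's penalized, sparse-interval version; the bin counts below are an invented deterministic example.

```python
import math

def scan_statistic(counts):
    # Unpenalized Poisson scan: maximize the log likelihood ratio of an
    # elevated-rate interval against a homogeneous background.
    n_bins, total = len(counts), sum(counts)
    best, best_iv = 0.0, None
    prefix = [0]
    for c in counts:
        prefix.append(prefix[-1] + c)
    for i in range(n_bins):
        for j in range(i + 1, n_bins + 1):
            n_in = prefix[j] - prefix[i]
            frac = (j - i) / n_bins
            if n_in == 0 or n_in == total or n_in <= total * frac:
                continue  # only elevated intervals count as a 'bump'
            n_out = total - n_in
            llr = (n_in * math.log(n_in / (total * frac))
                   + n_out * math.log(n_out / (total * (1 - frac))))
            if llr > best:
                best, best_iv = llr, (i, j)
    return best, best_iv

# Deterministic example: baseline 10 per bin, bump of 30 in bins 8..11.
counts = [10] * 20
for k in range(8, 12):
    counts[k] = 30
stat, interval = scan_statistic(counts)
```

The quadratic loop over all intervals is exactly the cost that the paper's sparse approximating set avoids.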

5.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the importance sampling parameter is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.

6.
A likelihood ratio type test statistic and a Schwarz information criterion statistic are proposed for detecting possible bathtub-shaped changes in the parameter of a sequence of exponential distributions. The asymptotic distribution of the likelihood ratio type statistic under the null hypothesis and the testing procedure based on the Schwarz information criterion are derived. Numerical critical values and powers of the two methods are tabulated for certain selected values of the parameters. The tests are applied to detect the change points in the predator data and the Stanford heart transplant data.
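The basic ingredient of such a statistic can be sketched for the simpler single-change-point case (the paper targets bathtub-shaped changes, i.e. two change points; this one-change version only illustrates the likelihood ratio scan over split points for exponential data).

```python
import math

def exp_changepoint_lr(x):
    # For each split k, compare the two-mean exponential fit with the
    # pooled-mean fit: -2 log(Lambda) for a change at k.
    n = len(x)
    pooled = sum(x) / n
    best_k, best_stat = None, -1.0
    for k in range(1, n):
        m1 = sum(x[:k]) / k
        m2 = sum(x[k:]) / (n - k)
        stat = 2.0 * (n * math.log(pooled)
                      - k * math.log(m1) - (n - k) * math.log(m2))
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_k, best_stat

# Deterministic illustration: the mean jumps from 1.0 to 5.0 after index 10.
data = [1.0] * 10 + [5.0] * 10
k_hat, stat = exp_changepoint_lr(data)
```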

7.
Effective implementation of likelihood inference in models for high‐dimensional data often requires a simplified treatment of nuisance parameters, with these having to be replaced by handy estimates. In addition, the likelihood function may have been simplified by means of a partial specification of the model, as is the case when composite likelihood is used. In such circumstances tests and confidence regions for the parameter of interest may be constructed using Wald type and score type statistics, defined so as to account for nuisance parameter estimation or partial specification of the likelihood. In this paper a general analytical expression for the required asymptotic covariance matrices is derived, and suggestions for obtaining Monte Carlo approximations are presented. The same matrices are involved in a rescaling adjustment of the log likelihood ratio type statistic that we propose. This adjustment restores the usual chi‐squared asymptotic distribution, which is generally invalid after the simplifications considered. The practical implication is that, for a wide variety of likelihoods and nuisance parameter estimates, confidence regions for the parameters of interest are readily computable from the rescaled log likelihood ratio type statistic as well as from the Wald type and score type statistics. Two examples, a measurement error model with full likelihood and a spatial correlation model with pairwise likelihood, illustrate and compare the procedures. Wald type and score type statistics may give rise to confidence regions with unsatisfactory shape in small and moderate samples. In addition to having satisfactory shape, regions based on the rescaled log likelihood ratio type statistic show empirical coverage in reasonable agreement with nominal confidence levels.

8.
In this paper, we introduce a new lifetime distribution by compounding the exponential and Poisson–Lindley distributions, named the exponential Poisson–Lindley (EPL) distribution. A practical situation where the EPL distribution is more appropriate for modelling lifetime data than the exponential–geometric, exponential–Poisson and exponential–logarithmic distributions is presented. We obtain the density and failure rate of the EPL distribution and properties such as mean lifetime, moments, order statistics and Rényi entropy. Furthermore, estimation by maximum likelihood and inference for large samples are discussed. The paper is motivated by two applications to real data sets, and we hope that this model will attract wider applicability in survival and reliability.

9.
We investigate a generalized semiparametric regression model. Such a model can avoid the risk of wrongly choosing the base measure function. We propose a profile likelihood to efficiently estimate both the parameter and the nonparametric function. The main difference from the classical profile likelihood is that the proposed profile likelihood is a functional of the base measure function, instead of a function of a real variable. By making the most of the structural information of the semiparametric exponential family, we obtain an explicit expression for the estimator of the least favorable curve. This ensures that the new profile likelihood is computationally simple. Due to the use of the least favorable curve, semiparametric efficiency is achieved and the estimation bias is reduced significantly. Simulation studies illustrate that our proposal performs better than existing methodologies in most cases under study and is robust to different model conditions.

10.
Consider the model of k two-parameter exponential populations under a type II censoring scheme. In this paper we establish optimality, in the sense of Bahadur, of the likelihood ratio test for an arbitrary testing problem by requiring only Condition A in Hsieh (1979). This result unifies and generalizes the results in Samanta (1986).

11.
The negative binomial (NB) is frequently used to model overdispersed Poisson count data. To study the effect of a continuous covariate of interest in an NB model, a flexible procedure is used to model the covariate effect by fixed-knot cubic basis-splines or B-splines with a second-order difference penalty on the adjacent B-spline coefficients to avoid undersmoothing. A penalized likelihood is used to estimate parameters of the model. A penalized likelihood ratio test statistic is constructed for the null hypothesis of the linearity of the continuous covariate effect. When the number of knots is fixed, its limiting null distribution is the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter value is determined by setting a specified value equal to the asymptotic expectation of the test statistic under the null hypothesis. The power performance of the proposed test is studied with simulation experiments.

12.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) and the maximum likelihood estimates (MLE) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter is the minimum sample value. In the second method, called the modified ML (MML) method, the estimates are found by utilizing the maximum likelihood value of the shape parameter in terms of the scale parameter and the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter, and for both complete and censored samples. In many cases, however, the ML estimates are comparable in their efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For larger values of the parameter, and for censored samples, the MML method appears superior to the other methods, with a slight advantage over the LS method. For larger values of the parameter α, for censored samples and all methods, underestimation can be a problem.
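The ML method described above has a closed form for uncensored data: the scale estimate is the sample minimum and the shape estimate follows from the log-spacings. A minimal uncensored sketch (the names `shape`/`scale` are ours; the paper's censored variants are not reproduced):

```python
import math
import random

def pareto_mle(x):
    # ML method for the Pareto distribution of the first kind:
    # scale_hat = min(x); shape_hat = n / sum(log(x_i / scale_hat)).
    scale_hat = min(x)
    shape_hat = len(x) / sum(math.log(v / scale_hat) for v in x)
    return shape_hat, scale_hat

# Simulated check via inverse-cdf sampling from Pareto(shape=3, scale=2).
rng = random.Random(7)
shape, scale = 3.0, 2.0
sample = [scale / (1.0 - rng.random()) ** (1.0 / shape) for _ in range(5000)]
shape_hat, scale_hat = pareto_mle(sample)
```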

13.
Pao-sheng Shen, Statistics, 2015, 49(3): 602–613
For the regression parameter β in the Cox model, there have been several estimates based on different types of approximated likelihood. For right-censored data, Ren and Zhou [Full likelihood inferences in the Cox model: an empirical approach. Ann Inst Statist Math. 2011;63:1005–1018] derive the full likelihood function for (β, F0), where F0 is the baseline distribution function in the Cox model. In this article, we extend their results to left-truncated and right-censored data with discrete covariates. Using the empirical likelihood parameterization, we obtain the full-profile likelihood function for β when covariates are discrete. Simulation results indicate that the maximum likelihood estimator outperforms Cox's partial likelihood estimator in finite samples.

14.
This paper is concerned with testing the equality of scale parameters of K (> 2) two-parameter exponential distributions in the presence of unspecified location parameters, based on complete and type II censored samples. We develop a marginal likelihood ratio statistic; a quadratic statistic (Qu) (Nelson, 1982) based on maximum marginal likelihood estimates of the scale parameters under the null and the alternative hypotheses; a C(α) statistic (CPL) (Neyman, 1959) based on the profile likelihood estimate of the scale parameter under the null hypothesis; and an extremal scale parameter ratio statistic (ESP) (McCool, 1979). We show that the marginal likelihood ratio statistic is equivalent to the modified Bartlett test statistic. We apply Bartlett's small sample correction to the marginal likelihood ratio statistic and call it the modified marginal likelihood ratio statistic (MLB). We then compare the four statistics, MLB, Qu, CPL and ESP, in terms of size and power by using Monte Carlo simulation experiments. For the variety of sample sizes, censoring combinations and nominal levels considered, the statistic MLB holds its nominal level most accurately and, based on empirically calculated critical values, performs best or as well as the others in most situations. Two examples are given.

15.
For testing a one-sided hypothesis in a one-parameter family of distributions, it is shown that the generalized likelihood ratio (GLR) test coincides with the uniformly most powerful (UMP) test, assuming certain monotonicity properties for the likelihood function. In particular, the equivalence of GLR tests and UMP tests holds for one-parameter exponential families. In addition, the relationship between GLR and UMPU (UMP unbiased) tests is considered when testing two-sided hypotheses.

16.
This article presents a note on the modified likelihood ratio test for homogeneity in beta mixture models. Under consistency of the penalized maximum likelihood estimators, the limiting distribution of the test statistic converges to a chi-bar-squared distribution. The statistic degenerates to zero with a certain weight due to the negative definiteness of a complicated random matrix. The probability that this matrix is negative definite is related to the parameter values under the homogeneity hypothesis. This dependency pattern enables the introduction of an upper bound on the asymptotic null distribution. A simulation study is conducted to verify the accuracy of the results.

17.
In this paper, an asymptotic expansion of the distribution of the likelihood ratio criterion for testing the equality of p one-parameter exponential distributions is obtained for unequal sample sizes. The expansion is obtained up to the order of n^(-3), with the second term of the order of n^(-2), so that the first term of this expansion alone should provide an excellent approximation to the distribution for moderately large values of n, where n is the combined sample size.
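The raw likelihood ratio criterion being expanded can be sketched directly: with exponential (mean θ_i) samples, the MLE of each mean is the sample mean, and under H0 all samples share the pooled mean. This sketch computes only -2 log(Λ), which is asymptotically chi-squared with p - 1 degrees of freedom; the article's higher-order correction terms are not reproduced.

```python
import math

def neg2_log_lr(samples):
    # -2 log(Lambda) for equality of p exponential means, unequal sample sizes:
    # 2 * ( n * log(pooled mean) - sum_i n_i * log(sample mean_i) ).
    n = sum(len(s) for s in samples)
    pooled = sum(sum(s) for s in samples) / n
    return 2.0 * (n * math.log(pooled)
                  - sum(len(s) * math.log(sum(s) / len(s)) for s in samples))

equal = [[1.0, 3.0], [2.0, 2.0], [0.5, 3.5]]    # all sample means equal 2
unequal = [[1.0, 1.0], [5.0, 5.0], [2.0, 2.0]]  # means 1, 5, 2
```

When all sample means coincide the statistic is exactly zero, which is a convenient sanity check on any implementation.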

18.
Series evaluation of Tweedie exponential dispersion model densities
Exponential dispersion models, which are linear exponential families with a dispersion parameter, are the prototype response distributions for generalized linear models. The Tweedie family comprises those exponential dispersion models with power mean–variance relationships. The normal, Poisson, gamma and inverse Gaussian distributions belong to the Tweedie family. Apart from these special cases, Tweedie distributions do not have density functions that can be written in closed form. Instead, the densities can be represented as infinite summations derived from series expansions. This article describes how the series expansions can be summed in a numerically efficient fashion. The usefulness of the approach is demonstrated, but full machine accuracy is shown not to be obtainable using the series expansion method for all parameter values. Derivatives of the density with respect to the dispersion parameter are also derived to facilitate maximum likelihood estimation. The methods are demonstrated on two data examples and compared with Box–Cox transformations and extended quasi-likelihood.
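For power parameter 1 < p < 2 the Tweedie distribution is a Poisson sum of gammas, so its continuous part can be evaluated by summing the mixture directly; the article's series expansions are the numerically careful version of this idea. The mapping from (mu, phi, p) to the compound-Poisson parameters below follows the standard parameterization and is stated here as an assumption, not taken from the article.

```python
import math

def tweedie_density(y, mu, phi, p, terms=200):
    # Continuous part of the Tweedie density for 1 < p < 2, evaluated as a
    # Poisson(lam) mixture of Gamma(n * alpha, gam) densities.
    lam = mu ** (2 - p) / (phi * (2 - p))   # Poisson rate
    alpha = (2 - p) / (p - 1)               # gamma shape per jump
    gam = phi * (p - 1) * mu ** (p - 1)     # gamma scale
    if y <= 0:
        return 0.0                          # point mass exp(-lam) sits at y = 0
    total = 0.0
    log_pois = -lam
    for n in range(1, terms):
        log_pois += math.log(lam) - math.log(n)   # log P(N = n), incrementally
        shape = n * alpha
        log_gamma_pdf = ((shape - 1) * math.log(y) - y / gam
                         - shape * math.log(gam) - math.lgamma(shape))
        total += math.exp(log_pois + log_gamma_pdf)
    return total
```

A useful check is that the continuous part integrates to 1 minus the point mass at zero, here with mu = 1, phi = 1, p = 1.5 (so lam = 2).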

19.
Empirical Likelihood for Censored Linear Regression
In this paper we investigate the empirical likelihood method in a linear regression model when the observations are subject to random censoring. An empirical likelihood ratio for the slope parameter vector is defined, and it is shown that its limiting distribution is a weighted sum of independent chi-square distributions. This reduces to the empirical likelihood for the linear regression model first studied by Owen (1991) if no censoring is present. Some simulation studies are presented to compare the empirical likelihood method with the normal approximation based method proposed in Lai et al. (1995). The empirical likelihood method was found to perform much better than the normal approximation method.
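The uncensored building block of this machinery, Owen's empirical likelihood ratio for a scalar mean, can be sketched as follows (the censored-regression version above replaces its chi-square limit with a weighted sum of chi-squares). The Lagrange multiplier is found by bisection, assuming mu lies strictly inside the range of the data.

```python
import math

def neg2_log_el(x, mu):
    # -2 log R(mu) for the mean: weights w_i = 1 / (n (1 + t (x_i - mu)))
    # with t chosen so that sum_i w_i (x_i - mu) = 0.
    d = [xi - mu for xi in x]
    if min(d) >= 0 or max(d) <= 0:
        raise ValueError("mu must be inside the convex hull of the data")
    # g(t) = sum d_i / (1 + t d_i) is decreasing on the feasible interval;
    # its root is the multiplier. Bracket just inside the positivity bounds.
    lo = -1.0 / max(d) + 1e-10
    hi = -1.0 / min(d) - 1e-10
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        g = sum(di / (1.0 + mid * di) for di in d)
        if g > 0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + t * di) for di in d)
```

At the sample mean the ratio attains its maximum, so -2 log R vanishes there and grows as mu moves away, which is what the chi-square (or weighted chi-square) calibration is applied to.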

20.
For testing a scalar interest parameter in a large sample asymptotic context, methods with third-order accuracy are now available that make a reduction to the simple case having a scalar parameter and scalar variable. For such simple models on the real line, we develop canonical versions that correspond closely to an exponential model and to a location model; these canonical versions are obtained by standardizing and reexpressing the variable and the parameters, the needed steps being given in algorithmic form. The exponential and location approximations have three parameters, two corresponding to the pure-model type and one for departure from that type. We also record the connections among the common test quantities: the signed likelihood departure, the standardized score variable, and the location-scale corrected signed likelihood ratio. These connections are for fixed data point and would bear on the effectiveness of the quantities for inference with the particular data; an earlier paper recorded the connections for fixed parameter value, and would bear on distributional properties.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)