Similar Documents
20 similar documents found (search time: 15 ms).
1.
The likelihood ratio is used to measure the strength of statistical evidence. The performance of this measure is evaluated by the probability of observing strong misleading evidence together with the probability of observing weak evidence. When the likelihood function is expressed in terms of a parametric statistical model that fails, the likelihood ratio retains its evidential value provided the likelihood function is robust [Royall, R., Tsou, T.S., 2003. Interpreting statistical evidence by using imperfect models: robust adjusted likelihood functions. J. Roy. Statist. Soc. Ser. B 65, 391–404]. In this paper, we extend the theory of Royall and Tsou (2003) to the case where the assumed working model is a characteristic model for two-way contingency tables (the independence, association, and correlation models). We observe that association and correlation models are not equivalent in terms of statistical evidence: the association models are bounded by the maximum of the bump function, while the correlation models are not.
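To make the evidential quantities concrete, here is a minimal simulation sketch (not taken from the paper) that checks Royall's universal bound P(LR ≥ k) ≤ 1/k on the probability of strong misleading evidence, using two simple hypotheses about a normal mean; all numerical settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 8                       # sample size and evidence threshold (illustrative)
mu_true, mu_alt, sigma = 0.0, 1.0, 1.0

def log_lr(x, mu0, mu1, sigma):
    """Log likelihood ratio of H1: mean mu1 versus H0: mean mu0 for iid normal data."""
    return np.sum((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

n_rep, misleading = 20000, 0
for _ in range(n_rep):
    x = rng.normal(mu_true, sigma, size=n)             # data generated under H0
    if log_lr(x, mu_true, mu_alt, sigma) >= np.log(k):
        misleading += 1                                 # strong evidence for the wrong hypothesis

print(f"P(misleading evidence) ~ {misleading / n_rep:.4f}, universal bound 1/k = {1 / k:.4f}")
```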

2.
How often would investigators be misled if they took advantage of the likelihood principle and used likelihood ratios, which need not be adjusted for multiple looks at the data, to frequently examine accumulating data? The answer, perhaps surprisingly, is not often. As expected, the probability of observing misleading evidence does increase with each additional examination. However, the amount by which this probability increases converges to zero as the sample size grows. As a result, the probability of observing misleading evidence remains bounded, and therefore controllable, even with an infinite number of looks at the data. Here we use boundary crossing results to detail how often misleading likelihood ratios arise in sequential designs. We find that the probability of observing a misleading likelihood ratio is often much less than its universal bound. Additionally, we find that in the presence of fixed-dimensional nuisance parameters, profile likelihoods are to be preferred over estimated likelihoods, which result from replacing the nuisance parameters by their global maximum likelihood estimates.
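The bounded-probability claim can be illustrated with a small simulation (a sketch under assumed normal data and simple hypotheses, not the paper's boundary-crossing analysis): monitor the likelihood ratio after every observation and count how often it ever crosses the threshold k.

```python
import numpy as np

rng = np.random.default_rng(1)
k, sigma = 8.0, 1.0
mu_true, mu_alt = 0.0, 0.5
max_n, n_rep = 500, 5000                                # number of looks and replications

ever_misled = 0
for _ in range(n_rep):
    x = rng.normal(mu_true, sigma, size=max_n)          # data arrive under H0
    # running log likelihood ratio of H1: mu_alt versus H0: mu_true after each look
    log_lr = np.cumsum(((x - mu_true) ** 2 - (x - mu_alt) ** 2) / (2 * sigma ** 2))
    if np.any(log_lr >= np.log(k)):
        ever_misled += 1

print(f"P(LR ever >= {k} within {max_n} looks) ~ {ever_misled / n_rep:.3f}; "
      f"universal bound 1/k = {1 / k:.3f}")
```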

3.
This article introduces a parametric robust way of determining the mean-variance relationship in the setting of generalized linear models. More specifically, the normal likelihood is properly amended to become asymptotically valid even if normality fails. Consequently, legitimate inference for the parametric relationship between mean and variance can be derived under model misspecification. Particular attention is given to the scenario in which the variance is proportional to an unknown power of the mean function. The efficacy of the novel technique is demonstrated via simulations and the analysis of two real data sets.

4.
It is shown that linear transformations of the logarithm are the only functions of the likelihood whose expected values discriminate between correct and incorrect likelihoods by a simple ordering property, assuming the correct probability density function is continuous. Also, an extension of this result is given for the predictive densities considered by Akaike.

5.
One important type of question in statistical inference is how to interpret data as evidence. The law of likelihood provides a satisfactory answer in interpreting data as evidence for simple hypotheses, but remains silent for composite hypotheses. This article examines how the law of likelihood can be extended to composite hypotheses within the scope of the likelihood principle. From a system of axioms, we conclude that the strength of evidence for composite hypotheses should be represented by an interval between lower and upper profile likelihoods. This article is intended to reveal the connection between profile likelihoods and the law of likelihood under the likelihood principle, rather than to argue in favor of the use of profile likelihoods in addressing general questions of statistical inference. The interpretation of the result is also discussed.
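As a toy illustration of lower and upper profile likelihoods for a composite hypothesis (a sketch assuming normal data with known variance; it is not the axiomatic construction in the article):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.normal(0.3, 1.0, size=30)                     # illustrative data, sigma known to be 1

def loglik(mu):
    return -0.5 * np.sum((x - mu) ** 2)               # normal log likelihood up to a constant

def evidence_interval(lo, hi):
    """Lower and upper profile likelihoods of H: mu in [lo, hi], standardized by the MLE."""
    sup = -minimize_scalar(lambda m: -loglik(m), bounds=(lo, hi), method="bounded").fun
    inf = min(loglik(lo), loglik(hi))                 # unimodal likelihood: the infimum sits at an endpoint
    mle = loglik(np.mean(x))                          # unrestricted maximum at the sample mean
    return np.exp(inf - mle), np.exp(sup - mle)

print("evidence interval for H: mu in [0, 1]:", evidence_interval(0.0, 1.0))
```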

6.
Biao Zhang, Statistics, 2016, 50(5): 1173–1194
Missing covariate data occur often in regression analysis. We study methods for estimating the regression coefficients in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Robins et al. [Estimation of regression coefficients when some regressors are not always observed. J Amer Statist Assoc. 1994;89:846–866] on regression analyses with missing covariates, in which they pioneered the use of two working models, the working propensity score model and the working conditional score model. A recent approach to missing covariate data analysis is the empirical likelihood method of Qin et al. [Empirical likelihood in missing data problems. J Amer Statist Assoc. 2009;104:1492–1503], which effectively combines unbiased estimating equations. In this paper, we consider an alternative likelihood approach based on the full likelihood of the observed data. This full likelihood-based method enables us to generate estimators for the vector of regression coefficients that are (a) asymptotically equivalent to those of Qin et al. (2009) when the working propensity score model is correctly specified, and (b) doubly robust, like the augmented inverse probability weighting (AIPW) estimators of Robins et al. (1994). Thus, the proposed full likelihood-based estimators improve on the efficiency of the AIPW estimators when the working propensity score model is correct but the working conditional score model is possibly incorrect, and also improve on the empirical likelihood estimators of Qin, Zhang and Leung (2009) when the reverse is true, that is, when the working conditional score model is correct but the working propensity score model is possibly incorrect. In addition, we consider a regression method for estimation of the regression coefficients when the working conditional score model is correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994). Finally, we compare the finite-sample performance of various estimators in a simulation study.
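The double-robustness property referenced above can be illustrated with a short sketch. For simplicity it uses the missing-outcome setting rather than the paper's missing-covariate setting, and all model choices and numbers are illustrative assumptions: the AIPW estimator stays consistent if either the working propensity model or the working outcome-regression model is correct.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)          # outcome, E[Y] = 1
p = 1.0 / (1.0 + np.exp(-(0.5 + x)))            # true missingness propensity
r = rng.binomial(1, p)                          # r = 1: y observed, r = 0: y missing
y_obs = np.where(r == 1, y, 0.0)                # placeholder 0 where y is missing

def aipw_mean(p_hat, m_hat):
    """AIPW estimate of E[Y] from a working propensity p_hat and outcome model m_hat."""
    return np.mean(r * y_obs / p_hat + (1.0 - r / p_hat) * m_hat)

p_wrong = np.full(n, r.mean())                  # misspecified (constant) propensity model
m_wrong = np.full(n, y_obs[r == 1].mean())      # misspecified (constant) outcome model

print("both working models correct:", aipw_mean(p, 1.0 + 2.0 * x))
print("propensity model wrong only:", aipw_mean(p_wrong, 1.0 + 2.0 * x))
print("outcome model wrong only:   ", aipw_mean(p, m_wrong))
```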

7.
This article introduces a parametric robust way of comparing two population means and two population variances. With large samples, the comparison of two means under model misspecification is less of a problem, because the validity of inference is protected by the central limit theorem. However, the assumption of normality is generally required so that inference for the ratio of two variances can be carried out with the familiar F statistic. A parametric robust approach that is insensitive to the distributional assumption will be proposed here. More specifically, it will be demonstrated that the normal likelihood function can be adjusted for asymptotically valid inference for all underlying distributions with finite fourth moments. The normal likelihood function, on the other hand, is itself robust for the comparison of two means, so no adjustment is needed there.
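A minimal sketch of the underlying idea, using a fourth-moment (delta-method) adjustment rather than the paper's adjusted normal likelihood: a Wald test for the log variance ratio whose standard error remains valid without normality, provided fourth moments are finite. The data-generating choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def log_var_ratio_test(x, y):
    """Wald test of H0: var(x) = var(y) using Var(log s^2) ~ (kurtosis - 1)/n."""
    def log_s2_and_var(z):
        n, s2 = len(z), z.var(ddof=1)
        m4 = np.mean((z - z.mean()) ** 4)              # fourth central moment
        return np.log(s2), (m4 / s2 ** 2 - 1.0) / n    # delta-method variance of log s^2
    lx, vx = log_s2_and_var(np.asarray(x))
    ly, vy = log_s2_and_var(np.asarray(y))
    z_stat = (lx - ly) / np.sqrt(vx + vy)
    return z_stat, 2 * norm.sf(abs(z_stat))            # two-sided p-value

rng = np.random.default_rng(4)
x = rng.standard_t(df=6, size=300)                     # heavy-tailed data: the F test would be unreliable
y = 1.3 * rng.standard_t(df=6, size=300)               # second sample with a larger variance
print(log_var_ratio_test(x, y))
```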

8.
Statistics, 2012, 46(6): 1187–1209
According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the parameter of interest. Consider the problem of assessing the weight of evidence for each of several hypotheses. Under a realistic model with a free parameter for each alternative hypothesis, this leads to weighing evidence without any shrinkage toward a presumption of the truth of each null hypothesis. That lack of shrinkage can lead to many false positives in settings with large numbers of hypotheses. A related problem is that point hypotheses cannot have more support than their alternatives. Both problems may be solved by fusing the realistic model with a model of a more restricted parameter space for use with the general law of likelihood. Applying the proposed framework of model fusion to data sets from genomics and education yields intuitively reasonable weights of evidence.

9.
The joint probability density function, evaluated at the observed data, is commonly used as the likelihood function to compute maximum likelihood estimates. For some models, however, there exist paths in the parameter space along which this density-approximation likelihood goes to infinity and maximum likelihood estimation breaks down. In all applications, however, observed data are really discrete due to the round-off or grouping error of measurements. The "correct likelihood" based on interval censoring can eliminate the problem of an unbounded likelihood. This article categorizes the models leading to unbounded likelihoods into three groups and illustrates the density-approximation breakdown with specific examples. Although it is usually possible to infer how given data were rounded, when this is not possible, one must choose the width for interval censoring, so we study the effect of the round-off on estimation. We also give sufficient conditions for the joint density to provide the same maximum likelihood estimate as the correct likelihood, as the round-off error goes to zero.
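A small sketch of the phenomenon for a two-component normal mixture (a standard example of an unbounded density-approximation likelihood; the numerical settings are illustrative): the density-based log-likelihood diverges as one component's scale shrinks toward an observed value, while the interval-censored likelihood based on the rounded data stays bounded, since probabilities cannot exceed 1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.round(rng.normal(0.0, 1.0, size=50), 1)        # data recorded to 1 decimal (round-off width 0.1)
delta = 0.1

def density_loglik(mu1, s1, mu2, s2, w=0.5):
    f = w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)
    return np.sum(np.log(f))

def censored_loglik(mu1, s1, mu2, s2, w=0.5):
    lo, hi = x - delta / 2, x + delta / 2
    p = (w * (norm.cdf(hi, mu1, s1) - norm.cdf(lo, mu1, s1))
         + (1 - w) * (norm.cdf(hi, mu2, s2) - norm.cdf(lo, mu2, s2)))
    return np.sum(np.log(p))

# Put one component's mean on an observed value and let its scale shrink toward zero:
for s1 in (1e-1, 1e-3, 1e-5):
    print(f"s1={s1:.0e}  density loglik={density_loglik(x[0], s1, 0.0, 1.0):9.2f}"
          f"  censored loglik={censored_loglik(x[0], s1, 0.0, 1.0):9.2f}")
```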

10.
The multivariate normal distribution, owing to its well-established theory, is commonly used to analyze correlated data of various types. However, the resulting inference is often invalid if the model assumption fails. We present a modification that adapts the multivariate normal likelihood to general correlated data. The modified likelihood is asymptotically valid for any true underlying joint distribution, so long as it has finite second moments. One can therefore obtain full likelihood inference without knowing the true random mechanism underlying the data. Simulations and real data analyses are provided to demonstrate the merit of our proposed parametric robust method.

11.
This article analyses diffusion-type processes from a new point of view. Consider two statistical hypotheses on a diffusion process. We neither use a classical Neyman–Pearson test to accept or reject one of the hypotheses nor adopt a Bayesian approach. Instead, we propose using a likelihood paradigm to characterize the statistical evidence in support of these hypotheses. The method is based on the evidential inference introduced and described by Royall [Royall R. Statistical evidence: a likelihood paradigm. London: Chapman and Hall; 1997]. In this paper, we extend the theory of Royall to the case where the data are observations from a diffusion-type process rather than iid observations. The empirical distribution of the likelihood ratio is used to formulate the probabilities of strong, misleading, and weak evidence. Since the strength of evidence can be affected by the sampling characteristics, we present a simulation study that demonstrates these effects, and we attempt to control and reduce misleading evidence by adjusting these characteristics. As an illustration, we apply the method to Microsoft stock prices.

12.
We examine the asymptotic and small-sample properties of model-based and robust tests of the null hypothesis of no randomized treatment effect based on the partial likelihood arising from an arbitrarily misspecified Cox proportional hazards model. When the distribution of the censoring variable is either conditionally independent of the treatment group given covariates or conditionally independent of covariates given the treatment group, the numerators of the partial likelihood treatment score and Wald tests have asymptotic mean equal to 0 under the null hypothesis, regardless of whether or how the Cox model is misspecified. We show that the model-based variance estimators used in the calculation of the model-based tests are not, in general, consistent under model misspecification, yet using analytic considerations and simulations we show that their true sizes can be as close to the nominal value as those of tests calculated with robust variance estimators. As a special case, we show that the model-based log-rank test is asymptotically valid. When the Cox model is misspecified and the distribution of censoring depends on both treatment group and covariates, the asymptotic distributions of the resulting partial likelihood treatment score statistic and maximum partial likelihood estimator do not, in general, have a zero mean under the null hypothesis. Here neither the fully model-based tests, including the log-rank test, nor the robust tests will be asymptotically valid, and we show through simulations that the distortion to test size can be substantial.

13.
We propose a universal robust likelihood that is able to accommodate correlated binary data without any information about the underlying joint distributions. This likelihood function is asymptotically valid for the regression parameter for any underlying correlation configuration, including varying under- or over-dispersion situations, which undermine one of the regularity conditions ensuring the validity of crucial large-sample theory. The robust likelihood procedure can be easily implemented with any statistical software that provides naïve and sandwich covariance matrices for regression parameter estimates. Simulations and real data analyses are used to demonstrate the efficacy of this parametric robust method.
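A minimal sketch of the naïve-versus-sandwich comparison that such software would report, implemented directly with NumPy (an independence working logistic model with a cluster sandwich covariance; the data-generating choices are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)
n_clusters, m = 200, 5
u = rng.normal(0.0, 1.0, size=n_clusters)               # cluster effects inducing within-cluster correlation
x = rng.normal(size=(n_clusters, m))
eta = -0.5 + 1.0 * x + u[:, None]
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = np.column_stack([np.ones(n_clusters * m), x.ravel()])
yv = y.ravel()
cluster = np.repeat(np.arange(n_clusters), m)

beta = np.zeros(2)
for _ in range(25):                                      # Newton-Raphson for the working (independence) logistic MLE
    p = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ ((p * (1 - p))[:, None] * X)
    beta = beta + np.linalg.solve(H, X.T @ (yv - p))

p = 1 / (1 + np.exp(-X @ beta))                          # recompute at the converged estimate
H = X.T @ ((p * (1 - p))[:, None] * X)
naive_cov = np.linalg.inv(H)

score = X * (yv - p)[:, None]
B = sum(np.outer(s, s) for s in (score[cluster == g].sum(axis=0) for g in range(n_clusters)))
sandwich_cov = naive_cov @ B @ naive_cov                 # cluster-robust (sandwich) covariance

print("estimate    :", beta)
print("naive    SE :", np.sqrt(np.diag(naive_cov)))
print("sandwich SE :", np.sqrt(np.diag(sandwich_cov)))
```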

14.
In this paper we consider the problem of estimating the locations of several normal populations when an order relation between them is known to be true. We compare the maximum likelihood estimator, the M-estimators based on Huber's ψ function, a robust weighted likelihood estimator, the Gastwirth estimator, and the trimmed mean estimator. A Monte Carlo study illustrates the performance of the methods considered.

15.
Tsou (2003a) proposed a parametric procedure for making robust inference for mean regression parameters in the context of generalized linear models. This robust procedure is extended to model variance heterogeneity. The normal working model is adjusted to become asymptotically robust for inference about regression parameters of the variance function for practically all continuous response variables. The connection between the novel robust variance regression model and the estimating equations approach is also provided.

16.
In this article, we propose two novel diagnostic measures for the deletion of influential observations for regression parameters in the setting of generalized linear models. The proposed diagnostic methods are capable of detecting influential observations under model misspecification, as long as the true underlying distributions have finite second moments. More specifically, it is demonstrated that the Poisson likelihood function can be properly adjusted to become asymptotically valid for practically all underlying discrete distributions. The adjusted Poisson regression model that achieves this robustness property is presented. Simulation studies and an illustration are performed to demonstrate the efficacy of the two novel diagnostic procedures.
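A sketch of a Cook-type deletion diagnostic built on a Poisson working likelihood with a sandwich covariance (illustrative only; it is not necessarily the measure proposed in the article). Over-dispersed counts are simulated so that the Poisson model is deliberately misspecified.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.8 * x)
y = rng.negative_binomial(2, 2 / (2 + mu))               # over-dispersed counts with mean mu
X = np.column_stack([np.ones(n), x])

def fit_poisson(Xm, ym, iters=25):
    b = np.zeros(Xm.shape[1])
    for _ in range(iters):                                # Newton-Raphson for the Poisson working MLE
        m = np.exp(Xm @ b)
        b = b + np.linalg.solve(Xm.T @ (m[:, None] * Xm), Xm.T @ (ym - m))
    return b

beta = fit_poisson(X, y)
m = np.exp(X @ beta)
A = X.T @ (m[:, None] * X)                                # naive information
S = X * (y - m)[:, None]
cov_rob = np.linalg.inv(A) @ (S.T @ S) @ np.linalg.inv(A) # sandwich covariance

influence = []
for i in range(n):                                        # exact case deletion and refit
    d = beta - fit_poisson(np.delete(X, i, axis=0), np.delete(y, i))
    influence.append(d @ np.linalg.solve(cov_rob, d))

print("five most influential observations:", np.argsort(influence)[-5:])
```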

17.
When tables contain missing values, statistical models that allow the non-response probability to be a function of the intended response have been proposed by several researchers. We investigate the properties of these methods in the context of a survey of the sexual behaviour of university students. Profile likelihoods can be computed even when models are not identified, and saturated profile likelihoods (making no assumptions about the non-response mechanism) are derived. Bayesian approaches are investigated, and it is shown that their results may be highly sensitive to the prior specification. The proportion of responders answering 'yes' to the question 'have you ever had sexual intercourse?' was 73%. However, different assumptions about the non-responders gave proportions as low as 46% or as high as 83%. Our preferred estimate, derived from the response-saturated profile likelihood, is 67% with a 95% confidence interval of 58–74%. This is in line with other studies on response bias in reports of young people's sexual behaviour, which suggest that respondents overrepresent the sexually active.

18.
Sieve Empirical Likelihood and Extensions of the Generalized Least Squares
The empirical likelihood sometimes cannot be used directly when an infinite-dimensional parameter of interest is involved. To overcome this difficulty, sieve empirical likelihoods are introduced in this paper. Based on the sieve empirical likelihoods, a unified procedure is developed for the estimation of constrained parametric or non-parametric regression models with unspecified error distributions. The procedure shows some interesting connections with certain extensions of the generalized least squares approach. A general asymptotic theory is provided. In the parametric regression setting, it is shown that under certain regularity conditions the proposed estimators are asymptotically efficient even if the restriction functions are discontinuous. In the non-parametric regression setting, the convergence rate of the maximum estimator based on the sieve empirical likelihood is given. In both settings, it is shown that the estimator is adaptive to the inhomogeneity of the conditional error distributions with respect to the predictor, especially under heteroscedasticity.

19.
Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is the use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this paper, we propose and examine Bayes factor (BF) methods that are derived via the EL ratio approach. Following Kass and Wasserman (1995), we consider Bayes factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF's asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test.
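As background for the EL ratio that such Bayes factors are built from, here is a minimal sketch of Owen's empirical likelihood ratio for a scalar mean (standard material, not the article's BF construction); the exponential data are an illustrative assumption.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen-style construction)."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                                   # mu lies outside the convex hull of the data
    # the Lagrange multiplier must keep every weight positive: 1 + lam * z_i > 0
    lo = -1.0 / z.max() + 1e-10
    hi = -1.0 / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + lam * z))

rng = np.random.default_rng(7)
x = rng.exponential(1.0, size=100)
print("-2 log ELR at the true mean 1.0:", el_log_ratio(x, 1.0))   # roughly chi-square(1) distributed
```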

20.
In this article, parametric robust regression approaches are proposed for making inferences about regression parameters in the setting of generalized linear models (GLMs). The proposed methods are able to test hypotheses on the regression coefficients in misspecified GLMs. More specifically, it is demonstrated that with large samples, the normal and gamma regression models can be properly adjusted to become asymptotically valid for inferences about regression parameters under model misspecification. These adjusted regression models can provide the correct type I and type II error probabilities and the correct coverage probability for continuous data, as long as the true underlying distributions have finite second moments.
