Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1–39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343–365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33–53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655–661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
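As background for the adjustments discussed above, the sketch below computes the ordinary (unadjusted) profile log-likelihood of the Weibull shape parameter in a regression model with a log-linear scale, which is the quantity the Cox–Reid and Barndorff-Nielsen modifications correct. The simulated data, variable names and grid are illustrative, not taken from the paper.

```python
# Minimal sketch: ordinary profile log-likelihood for the Weibull shape parameter
# in a regression model where the scale is exp(x'beta). Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, gamma_true, beta_true = 200, 1.5, np.array([1.0, 0.5])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
t = np.exp(X @ beta_true) * rng.weibull(gamma_true, size=n)   # uncensored responses

def neg_loglik(beta, gamma):
    z = (np.log(t) - X @ beta) * gamma                        # Weibull log-density pieces
    return -np.sum(np.log(gamma) - np.log(t) + z - np.exp(z))

def profile_loglik(gamma):
    res = minimize(neg_loglik, x0=np.zeros(X.shape[1]), args=(gamma,), method="BFGS")
    return -res.fun                                           # maximise over beta at fixed gamma

grid = np.linspace(0.8, 2.5, 60)
prof = np.array([profile_loglik(g) for g in grid])
print("profile MLE of the shape parameter:", grid[np.argmax(prof)])
```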

2.
R.A. Fisher (1957) discussed conditional inference for a spherical normal with mean restricted to a circle. Related statistical models have been discussed by Fraser (1968, 1979). This paper develops statistical inference for a spherical normal with mean restricted to a sphere-cylinder embedded in R^n. This material forms the basis for analyzing non-linear least squares generally, and obtaining second-order data-sensitive conditional procedures.

3.
Models that involve an outcome variable, covariates, and latent variables are frequently the target for estimation and inference. The presence of missing covariate or outcome data presents a challenge, particularly when missingness depends on the latent variables. This missingness mechanism is called latent ignorable or latent missing at random and is a generalisation of missing at random. Several authors have previously proposed approaches for handling latent ignorable missingness, but these methods rely on prior specification of the joint distribution for the complete data. In practice, specifying the joint distribution can be difficult and/or restrictive. We develop a novel sequential imputation procedure for imputing covariate and outcome data for models with latent variables under latent ignorable missingness. The proposed method does not require a joint model; rather, we use results under a joint model to inform imputation with less restrictive modelling assumptions. We discuss identifiability and convergence-related issues, and simulation results are presented in several modelling settings. The method is motivated and illustrated by a study of head and neck cancer recurrence.

4.
This paper presents a method for estimating likelihood ratios for stochastic compartment models when only times of removals from a population are observed. The technique operates by embedding the models in a composite model parameterised by an integer k which identifies a switching time when dynamics change from one model to the other. Likelihood ratios can then be estimated from the posterior density of k using Markov chain methods. The techniques are illustrated by a simulation study involving an immigration-death model and validated using analytic results derived for this case. They are also applied to compare the fit of stochastic epidemic models to historical data on a smallpox epidemic. In addition to estimating likelihood ratios, the method can be used for direct estimation of likelihoods by selecting one of the models in the comparison to have a known likelihood for the observations. Some general properties of the likelihoods typically arising in this scenario, and their implications for inference, are illustrated and discussed.

5.
In the literature on encompassing [see e.g. Mizon-Richard (1986), Hendry-Richard (1990), Florens-Hendry-Richard (1987)] there is a basic contradiction: on the one hand it is said that it is not possible to assume that the true distribution belongs to one of two competing models M1 and M2, but, on the other hand, this assumption is made in the study of encompassing tests. In this paper we first propose a formal definition of encompassing, we then briefly examine the properties of this notion and we propose encompassing tests which do not assume that the true distribution belongs to M1 or M2; these tests are based on simulations. Finally, generalizing an idea used in the definition of an encompassing test (the GET test), we propose a new kind of inference, called indirect inference, which allows for estimation and test procedures when the model is too complicated to be treated by usual methods (for instance maximum likelihood methods); the only assumption made on the model is that it can be simulated, which seems to be a minimal requirement. This new class of inference methods can be used in a large number of domains and some examples are given. The present paper is based on Gouriéroux-Monfort (1992) and Gouriéroux-Monfort-Renault (1993), respectively GM and GMR hereafter. Invited paper at the Conference on "Statistical Tests: Methodology and Econometric Applications", held in Bologna, Italy, 27–28 May 1993.
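To illustrate the simulation-only requirement behind indirect inference, the sketch below estimates the parameter of a structural model (here an MA(1)) by matching an auxiliary statistic computed on simulated paths to the same statistic computed on the data. The choice of model, auxiliary statistic and all names are illustrative assumptions, not taken from the paper.

```python
# Minimal indirect-inference sketch: the structural model is only simulated, and its
# parameter is tuned so that the simulated lag-1 autocorrelation matches the observed one.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
T, theta_true = 2000, 0.6

def simulate_ma1(theta, T, rng):
    eps = rng.normal(size=T + 1)
    return eps[1:] + theta * eps[:-1]

def lag1_autocorr(y):
    y = y - y.mean()
    return np.sum(y[1:] * y[:-1]) / np.sum(y * y)

y_obs = simulate_ma1(theta_true, T, rng)        # stands in for real data
rho_obs = lag1_autocorr(y_obs)

def criterion(theta, n_sim=20):
    sim_rng = np.random.default_rng(123)        # fixed seed: common random numbers
    rhos = [lag1_autocorr(simulate_ma1(theta, T, sim_rng)) for _ in range(n_sim)]
    return (rho_obs - np.mean(rhos)) ** 2

fit = minimize_scalar(criterion, bounds=(-0.95, 0.95), method="bounded")
print("indirect-inference estimate of theta:", fit.x)
```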

6.
Consider panel data modelled by a linear random intercept model that includes a time-varying covariate. Suppose that our aim is to construct a confidence interval for the slope parameter. Commonly, a Hausman pretest is used to decide whether this confidence interval is constructed using the random effects model or the fixed effects model. This post-model-selection confidence interval has the attractive features that it (a) is relatively short when the random effects model is correct and (b) reduces to the confidence interval based on the fixed effects model when the data and the random effects model are highly discordant. However, this confidence interval has the drawbacks that (i) its endpoints are discontinuous functions of the data and (ii) its minimum coverage can be far below its nominal coverage probability. We construct a new confidence interval that possesses these attractive features, but does not suffer from these drawbacks. This new confidence interval is intermediate between the post-model-selection confidence interval and the confidence interval obtained by always using the fixed effects model. The endpoints of the new confidence interval are smooth functions of the Hausman test statistic, whereas the endpoints of the post-model-selection confidence interval are discontinuous functions of this statistic.

7.
Based on a generalized cumulative damage approach with a stochastic process describing degradation, new accelerated life test models are presented in which both observed failures and degradation measures can be considered for parametric inference of system lifetime. Incorporating an accelerated test variable, we provide several new accelerated degradation models for failure based on the geometric Brownian motion or gamma process. It is shown that in most cases, our models for failure can be approximated closely by accelerated test versions of Birnbaum–Saunders and inverse Gaussian distributions. Estimation of model parameters and a model selection procedure are discussed, and two illustrative examples using real data for carbon-film resistors and fatigue crack size are presented.
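The degradation-threshold view of failure underlying such models can be illustrated with a short simulation: a gamma-process path accumulates independent Gamma increments, and "failure" is the first time the path crosses a critical level. The rate, scale and threshold values below are illustrative, not the paper's examples or its accelerated-test structure.

```python
# Minimal sketch: gamma-process degradation paths and first-passage failure times.
import numpy as np

rng = np.random.default_rng(2)
n_units, n_steps, dt = 500, 400, 0.25
rate, scale, threshold = 1.2, 0.8, 60.0              # Gamma(rate*dt, scale) increments

increments = rng.gamma(shape=rate * dt, scale=scale, size=(n_units, n_steps))
paths = np.cumsum(increments, axis=1)                # monotone degradation paths
crossed = paths >= threshold
failure_step = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps)
failure_time = (failure_step + 1) * dt               # first passage over the threshold

failed = crossed.any(axis=1)
print("observed failures:", int(failed.sum()), "of", n_units)
print("mean simulated lifetime among failures:", failure_time[failed].mean())
```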

8.
Two or more regression models are said to be non-nested if neither can be obtained from the remaining models when parametric restrictions are imposed. Tests for choosing between linear non-nested regression models, such as the J and MJ tests, are found in the literature. In this paper we propose variants of these two tests for the GAMLSS (Generalized Additive Models for Location, Scale and Shape) class of models. We report Monte Carlo evidence on the finite sample behaviour of the proposed tests. Bootstrap-based testing inference is also considered. Overall, the bootstrap MJ test had the best performance. An empirical application is presented and discussed.
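For readers unfamiliar with the baseline procedure that the GAMLSS variants generalise, the sketch below runs the classical J test for two non-nested linear models: fit the rival model, append its fitted values to the first model's design, and test whether that extra regressor is significant. Data and regressor names are illustrative.

```python
# Minimal sketch of the classical J test for non-nested linear regressions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)             # data generated under M1

X1 = sm.add_constant(x1)                             # M1: y ~ x1
X2 = sm.add_constant(x2)                             # M2: y ~ x2 (non-nested rival)

fit2 = sm.OLS(y, X2).fit()
X1_aug = np.column_stack([X1, fit2.fittedvalues])    # M1 augmented with M2's fitted values
fit_j = sm.OLS(y, X1_aug).fit()

print("J-test p-value for M2's fitted values:", fit_j.pvalues[-1])
```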

9.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.

10.
In this paper, we extend the work of Gjestvang and Singh [A new randomized response model, J. R. Statist. Soc. Ser. B (Methodological) 68 (2006), pp. 523–530] to propose a new unrelated question randomized response model that can be used for any sampling scheme. Notably, the estimator based on one sample does not require the proportion of an unrelated character to be known, unlike the Horvitz et al. [The unrelated question randomized response model, Social Statistics Section, Proceedings of the American Statistical Association, 1967, pp. 65–72], Greenberg et al. [The unrelated question randomized response model: Theoretical framework, J. Amer. Statist. Assoc. 64 (1969), pp. 520–539] and Mangat et al. [An improved unrelated question randomized response strategy, Calcutta Statist. Assoc. Bull. 42 (1992), pp. 167–168] models. The relative efficiency of the proposed model with respect to the existing competitors has been studied.
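For context on how randomized response recovers a sensitive prevalence from scrambled answers, the sketch below simulates Warner's (1965) original design rather than the unrelated-question variant proposed here; the design probability and prevalence are illustrative assumptions.

```python
# Minimal sketch of Warner's randomized response design and its unbiased estimator.
import numpy as np

rng = np.random.default_rng(6)
n, p, pi_true = 2000, 0.7, 0.30                  # design probability and true prevalence
sensitive = rng.random(n) < pi_true              # true (unobserved) status
asks_direct = rng.random(n) < p                  # card asks the sensitive question
answer_yes = np.where(asks_direct, sensitive, ~sensitive)

lam_hat = answer_yes.mean()                      # observed "yes" rate
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)       # Warner's estimator of the prevalence
print("estimated prevalence:", pi_hat)
```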

11.
Maximality of ancillaries is important in both conditional inference without nuisance parameters and marginal inference with nuisance parameters. We extend results of Basu (1959) to the more classical former case and discuss the different nature of ancillaries in these two contexts. We apply the results to a general ancillary independent of a sufficient statistic. Finally, we discuss difficulties in finding necessary conditions for maximality.

12.
In this paper, a likelihood-based analysis is developed and applied to obtain confidence intervals and p-values for the stress-strength reliability R = P(X < Y) with right-truncated exponentially distributed data. The proposed method is based on theory given in Fraser et al. (Biometrika 86:249–264, 1999) which involves implicit but appropriate conditioning and marginalization. Monte Carlo simulations are used to illustrate the accuracy of the proposed method.
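As a point of reference for the quantity being estimated, the sketch below computes R = P(X < Y) for plain (untruncated) exponential samples, for which R = rate_x/(rate_x + rate_y) and the rates are estimated by reciprocal sample means; it deliberately ignores the right truncation and the higher-order adjustments treated in the paper. Sample sizes and rates are illustrative.

```python
# Minimal sketch: plug-in MLE and a nonparametric check of R = P(X < Y) for exponential data.
import numpy as np

rng = np.random.default_rng(4)
rate_x, rate_y, n, m = 1.0, 0.5, 40, 50
x = rng.exponential(scale=1 / rate_x, size=n)     # "stress" sample
y = rng.exponential(scale=1 / rate_y, size=m)     # "strength" sample

rate_x_hat, rate_y_hat = 1 / x.mean(), 1 / y.mean()
R_mle = rate_x_hat / (rate_x_hat + rate_y_hat)    # plug-in MLE of P(X < Y)
R_nonparam = (x[:, None] < y[None, :]).mean()     # model-free check

print("true R:", rate_x / (rate_x + rate_y))
print("MLE of R:", R_mle, " nonparametric:", R_nonparam)
```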

13.
Consider the problem of inference about a parameter θ in the presence of a nuisance parameter ν. In a Bayesian framework, a number of posterior distributions may be of interest, including the joint posterior of (θ, ν), the marginal posterior of θ, and the posterior of θ conditional on different values of ν. The interpretation of these various posteriors is greatly simplified if a transformation (θ, h(θ, ν)) can be found so that θ and h(θ, ν) are approximately independent. In this article, we consider a graphical method for finding this independence transformation, motivated by techniques from exploratory data analysis. Some simple examples of the use of this method are given and some of the implications of this approximate independence in a Bayesian analysis are discussed.

14.
This paper considers a hierarchical Bayesian analysis of regression models using a class of Gaussian scale mixtures. This class provides a robust alternative to the common use of the Gaussian distribution as a prior distribution, in particular for estimating a regression function subject to uncertainty about the constraint. For this purpose, we use a family of rectangular screened multivariate scale mixtures of Gaussian distributions as a prior for the regression function, which is flexible enough to reflect the degrees of uncertainty about the functional constraint. Specifically, we propose a hierarchical Bayesian regression model for the constrained regression function with uncertainty on the basis of three stages of a prior hierarchy with Gaussian scale mixtures, referred to as a hierarchical screened scale mixture of Gaussian regression models (HSMGRM). We describe distributional properties of HSMGRM and an efficient Markov chain Monte Carlo algorithm for posterior inference, and apply the proposed model to real applications with constrained regression models subject to uncertainty.

15.
Linear mixed models are widely used when multiple correlated measurements are made on each unit of interest. In many applications, the units may form several distinct clusters, and such heterogeneity can be more appropriately modelled by a finite mixture linear mixed model. The classical estimation approach, in which both the random effects and the error parts are assumed to follow a normal distribution, is sensitive to outliers, and failure to accommodate outliers may greatly jeopardize the model estimation and inference. We propose a new mixture linear mixed model using the multivariate t distribution. For each mixture component, we assume the response and the random effects jointly follow a multivariate t distribution, to conveniently robustify the estimation procedure. An efficient expectation conditional maximization algorithm is developed for conducting maximum likelihood estimation. The degrees of freedom parameters of the t distributions are chosen data-adaptively, for achieving a flexible trade-off between estimation robustness and efficiency. Simulation studies and an application on analysing lung growth longitudinal data showcase the efficacy of the proposed approach.

16.
The problem of inference in Bayesian Normal mixture models is known to be difficult. In particular, direct Bayesian inference (via quadrature) suffers from a combinatorial explosion in having to consider every possible partition of n observations into k mixture components, resulting in a computation time which is O(k^n). This paper explores the use of discretised parameters and shows that for equal-variance mixture models, direct computation time can be reduced to O(D^k nk), where relevant continuous parameters are each divided into D regions. As a consequence, direct inference is now possible on genuine data sets for small k, where the quality of approximation is determined by the level of discretisation. For large problems, where the computational complexity is still too great in O(D^k nk) time, discretisation can provide a convergence diagnostic for a Markov chain Monte Carlo analysis.
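A quick numerical comparison makes the gain concrete; the values of n, k and D below are illustrative, not taken from the paper.

```python
# Exhaustive summation over all component assignments costs on the order of k**n terms,
# whereas the discretised scheme costs on the order of D**k * n * k evaluations.
n, k, D = 100, 2, 20
print("direct partition sum  ~", float(k) ** n)    # about 1.3e30 terms
print("discretised scheme    ~", D ** k * n * k)   # 80,000 evaluations
```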

17.
The class of beta regression models proposed by Ferrari and Cribari-Neto [Beta regression for modelling rates and proportions, Journal of Applied Statistics 31 (2004), pp. 799–815] is useful for modelling data that assume values in the standard unit interval (0, 1). The dependent variable is related to a linear predictor, which includes regressors and unknown parameters, through a link function. The model is also indexed by a precision parameter, which is typically taken to be constant for all observations. Some authors have used, however, variable dispersion beta regression models, i.e., models that include a regression submodel for the precision parameter. In this paper, we show how to perform testing inference on the parameters that index the mean submodel without having to model the data precision. This strategy is useful as it is typically harder to model dispersion effects than mean effects. The proposed inference procedure is accurate even under variable dispersion. We present the results of extensive Monte Carlo simulations where our testing strategy is contrasted to that in which the practitioner models the underlying dispersion and then performs testing inference. An empirical application that uses real (not simulated) data is also presented and discussed.
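For orientation, the sketch below fits the basic fixed-dispersion beta regression of Ferrari and Cribari-Neto by maximum likelihood (logit link for the mean, constant precision φ); it does not implement the paper's testing strategy, and the simulated data and starting values are illustrative.

```python
# Minimal maximum-likelihood sketch of a fixed-dispersion beta regression.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist
from scipy.special import expit

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
mu_true, phi_true = expit(-0.5 + 1.0 * x), 30.0
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)   # responses in (0, 1)

X = np.column_stack([np.ones(n), x])

def neg_loglik(params):
    b, log_phi = params[:2], params[2]
    m, phi = expit(X @ b), np.exp(log_phi)
    return -np.sum(beta_dist.logpdf(y, m * phi, (1 - m) * phi))

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0, np.log(10.0)]), method="BFGS")
print("mean-submodel coefficients:", fit.x[:2], " precision:", np.exp(fit.x[2]))
```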

18.
Directional testing of vector parameters, based on higher order approximations of likelihood theory, can ensure extremely accurate inference, even in high-dimensional settings where standard first order likelihood results can perform poorly. Here we explore examples of directional inference where the calculations can be simplified, and prove that in several classical situations, the directional test reproduces exact results based on F-tests. These findings give a new interpretation of some classical results and support the use of directional testing in general models, where exact solutions are typically not available. The Canadian Journal of Statistics 47: 619–627; 2019 © 2019 Statistical Society of Canada

19.
The concept of distribution form developed in Brenner and Fraser (1980) is modified and extended to cover the more general context involving a class of distributions for form. This extension underlies the choice of a particular structural model for the three-parameter Weibull in Evans, Fraser and Massam (1982). The extended definition of distribution form is based on the requirement of objectivity in modelling (Fraser 1979). Three characterizations of this objectivity each require that the class of response presentations have closure under composition and thus be expressible in terms of a group. In particular, this implies that empirical support would not observationally be available for that generalization of a structural model called a structured model (Fraser 1972; the term functional model has been used inappropriately by Bunke, 1975 and Dawid and Stone, 1982).

20.
The aim of this paper is to introduce new statistical criteria for estimation, suitable for inference in models with common continuous support. This proposal is in line with a renewed interest in divergence-based inference tools embedding the most classical ones, such as maximum likelihood, Chi-square or Kullback–Leibler. General pseudodistances with decomposable structure are considered, which allow the definition of minimum pseudodistance estimators without using nonparametric density estimators. A special class of pseudodistances indexed by α>0, leading for α↓0 to the Kullback–Leibler divergence, is presented in detail. Corresponding estimation criteria are developed and asymptotic properties are studied. The estimation method is then extended to regression models. Finally, some examples based on Monte Carlo simulations are discussed.
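The α-indexed pseudodistances above are close in spirit to the density power divergence of Basu et al. (1998), which likewise avoids nonparametric density estimation and recovers the likelihood (Kullback–Leibler) criterion as α ↓ 0. The sketch below fits a normal location parameter by minimising that divergence under contamination; it illustrates the general idea only, not the paper's exact pseudodistance family, and the data and α value are illustrative.

```python
# Minimal sketch: minimum density-power-divergence estimate of a normal mean (sigma = 1).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 190), rng.normal(8.0, 1.0, 10)])  # 5% outliers

def dpd_objective(mu, alpha=0.5):
    f = norm.pdf(x, loc=mu, scale=1.0)
    integral = (2 * np.pi) ** (-alpha / 2) / np.sqrt(1 + alpha)   # integral of f_mu^(1+alpha)
    return integral - (1 + 1 / alpha) * np.mean(f ** alpha)

robust = minimize_scalar(dpd_objective, bounds=(-5, 10), method="bounded").x
print("sample mean (MLE):", x.mean(), " minimum-divergence estimate:", robust)
```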
