Similar Documents
20 similar documents retrieved (search time: 343 ms).
1.
Approximate Bayesian Inference for Survival Models
Abstract. Bayesian analysis of time-to-event data, usually called survival analysis, has received increasing attention in recent years. In Cox-type models it allows the use of information from the full likelihood instead of a partial likelihood, so that the baseline hazard function and the model parameters can be jointly estimated. In general, Bayesian methods permit full and exact posterior inference for any parameter or predictive quantity of interest. On the other hand, Bayesian inference often relies on Markov chain Monte Carlo (MCMC) techniques which, from the user's point of view, may appear slow at delivering answers. In this article, we show how a new inferential tool, integrated nested Laplace approximations (INLA), can be adapted and applied to many survival models, making Bayesian analysis both fast and accurate without having to rely on MCMC-based inference.
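A minimal sketch of the core ingredient, under toy assumptions of our own (exponential survival times with a Gamma prior on the rate; the model, data, and names such as neg_log_post are illustrative, not from the paper): a single Laplace approximation to a posterior. INLA nests and refines such approximations; this only shows the basic mode-plus-curvature idea, checked against the exact conjugate answer.

```python
import numpy as np
from scipy import optimize

# Toy model: t_i ~ Exponential(lam), prior lam ~ Gamma(a, b) (rate form).
# The exact posterior is Gamma(n + a, sum(t) + b), so we can verify the mode.
rng = np.random.default_rng(1)
t = rng.exponential(scale=0.5, size=40)        # data, true rate = 2
a, b = 2.0, 1.0                                # prior hyperparameters (assumed)

def neg_log_post(lam):                         # unnormalised negative log posterior
    return -((len(t) + a - 1) * np.log(lam) - (t.sum() + b) * lam)

res = optimize.minimize_scalar(neg_log_post, bounds=(1e-6, 50.0), method="bounded")
mode = res.x
# Numerical second derivative at the mode -> Gaussian (Laplace) approximation.
h = 1e-4
curv = (neg_log_post(mode + h) - 2 * neg_log_post(mode) + neg_log_post(mode - h)) / h**2
sd = 1 / np.sqrt(curv)
print(f"Laplace approximation: N({mode:.3f}, {sd:.3f}^2)")
print("exact posterior mode:", (len(t) + a - 1) / (t.sum() + b))
```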

2.
In a single-index Poisson regression model with unknown link function, the index parameter can be root-n consistently estimated by the method of pseudo maximum likelihood. In this paper, we study, through simulations, the practical validity of the asymptotic behaviour of the pseudo maximum likelihood index estimator and of some associated cross-validation bandwidths. A robust practical rule for implementing the pseudo maximum likelihood estimation method is suggested, which uses the bootstrap for estimating the variance of the index estimator and a variant of bagging for numerically stabilizing that variance. Our method gives reasonable results even for moderate-sized samples; thus, it can be used for statistical inference in practical situations. The procedure is illustrated through a real data example.
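The bootstrap-variance and bagging step can be sketched generically. Below, a one-covariate Poisson regression slope stands in for the index estimator (the paper's pseudo-ML estimator with an unknown link is more involved); all names and settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.8 * x))

def fit_slope(xs, ys):
    # Poisson negative log-likelihood (up to a constant) for y ~ Poisson(exp(b0 + b1 x))
    nll = lambda b: np.sum(np.exp(b[0] + b[1] * xs) - ys * (b[0] + b[1] * xs))
    return minimize(nll, x0=[0.0, 0.0], method="BFGS").x[1]

theta_hat = fit_slope(x, y)
idx = rng.integers(0, n, size=(200, n))          # 200 bootstrap resamples
boot = np.array([fit_slope(x[j], y[j]) for j in idx])
print("point estimate:", round(theta_hat, 3))
print("bootstrap variance:", round(boot.var(ddof=1), 5))
print("bagged (bootstrap-averaged) estimate:", round(boot.mean(), 3))
```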

3.
Approximate Bayesian computation (ABC) is a popular technique for analysing data under complex models where the likelihood function is intractable. It uses simulation from the model to approximate the likelihood, and this approximate likelihood is then used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural way to implement our likelihood-based ABC procedures.
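A minimal sketch of the estimator being analysed, under assumptions of our own: a kernel-smoothed ABC likelihood for a toy normal-mean model, maximised over a grid (the paper's sequential Monte Carlo implementation is more sophisticated).

```python
import numpy as np

# Toy model (assumed): y ~ N(theta, 1), summary statistic = sample mean,
# Gaussian kernel with bandwidth eps applied to the simulated summaries.
rng = np.random.default_rng(3)
y_obs = rng.normal(1.5, 1.0, size=100)
s_obs = y_obs.mean()
eps, m = 0.05, 500                       # kernel bandwidth, simulations per theta

def abc_loglik(theta):
    s_sim = rng.normal(theta, 1.0, size=(m, 100)).mean(axis=1)
    kern = np.exp(-0.5 * ((s_sim - s_obs) / eps) ** 2)
    return np.log(kern.mean() + 1e-300)   # Monte Carlo estimate of the ABC likelihood

grid = np.linspace(0.5, 2.5, 81)
ll = np.array([abc_loglik(t) for t in grid])
print("ABC maximum-likelihood estimate:", grid[ll.argmax()])
```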

4.
李小胜, 王申令. 《统计研究》 2016, 33(11): 85-92.
This paper first constructs the sample likelihood function of a multivariate linear regression model under linear constraints and uses the Lagrange method to establish its validity. Second, it discusses, from the standpoint of the likelihood function, the effect of the linear constraints on the model parameters, and improves the parameter estimates obtained from classical theory through Bayesian and empirical Bayes methods. For the Bayesian improvement, the matrix normal-Wishart distribution is taken as the joint conjugate prior for the model parameters and the precision matrix; combined with the constructed likelihood function, the posterior distribution of the parameters is derived and the Bayes estimates are computed. For the empirical Bayes improvement, the sample is split into groups, the influence of the subsample-based estimates on the full-sample estimates is examined from the standpoint of variance, and the empirical Bayes estimates are computed. Finally, simulations using random matrices generated in Matlab show that both improved estimators are more accurate than those obtained from classical theory, with smaller relative fitting errors and higher reliability, and that the method computes faster on large data sets.
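The Bayesian step rests on a standard conjugate update for multivariate regression. The sketch below uses the matrix-normal/inverse-Wishart form of that conjugacy (equivalent to a normal-Wishart prior on the coefficients and precision matrix) and omits the paper's linear constraints; hyperparameter names (B0, L0, nu0, V0) and data are our own illustrative choices.

```python
import numpy as np

# Multivariate regression Y = X B + E, E rows ~ N(0, Sigma).
# Prior: B | Sigma ~ MatrixNormal(B0, inv(L0), Sigma), Sigma ~ InvWishart(nu0, V0).
rng = np.random.default_rng(4)
n, p, q = 100, 3, 2
X = rng.normal(size=(n, p))
B_true = np.array([[1.0, -0.5], [0.3, 0.8], [-1.2, 0.1]])
Y = X @ B_true + rng.normal(scale=0.5, size=(n, q))

B0 = np.zeros((p, q)); L0 = np.eye(p)          # prior mean and prior precision scale
nu0, V0 = q + 2.0, np.eye(q)                   # inverse-Wishart prior on Sigma

# Conjugate posterior update.
Ln = X.T @ X + L0
Bn = np.linalg.solve(Ln, X.T @ Y + L0 @ B0)    # posterior mean of B
nun = nu0 + n
Vn = V0 + Y.T @ Y + B0.T @ L0 @ B0 - Bn.T @ Ln @ Bn
print("posterior mean of B:\n", Bn.round(3))
print("posterior mean of Sigma:\n", (Vn / (nun - q - 1)).round(3))
```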

5.
Mixture cure models are widely used when a proportion of patients are cured. The proportional hazards mixture cure model and the accelerated failure time mixture cure model are the most popular models in practice. Usually the expectation-maximisation (EM) algorithm is applied to both models for parameter estimation, and bootstrap methods are used for variance estimation. In this paper we propose a smooth semi-nonparametric (SNP) approach in which maximum likelihood is applied directly to mixture cure models for parameter estimation. The variance can be estimated from the inverse of the second derivative of the SNP likelihood. A comprehensive simulation study indicates good performance of the proposed method. We investigate stage effects in breast cancer by applying the proposed method to data from the South Carolina Cancer Registry.
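The EM mechanics for mixture cure models can be sketched in a deliberately simplified analogue, assumed here rather than taken from the paper: a constant uncured probability and exponential latency, so both EM steps are closed form (the PH/AFT cure models and the SNP likelihood above are far more flexible).

```python
import numpy as np

# Population survival: S(t) = (1 - pi) + pi * exp(-lam * t),
# pi = P(uncured).  Observed: time t, event indicator d.
rng = np.random.default_rng(5)
n = 500
cured = rng.random(n) < 0.3                     # 30% cured (never fail)
t_event = rng.exponential(1.0, n)
t_cens = rng.exponential(2.0, n)
t = np.where(cured, t_cens, np.minimum(t_event, t_cens))
d = ((~cured) & (t_event <= t_cens)).astype(float)

pi, lam = 0.5, 1.0                              # starting values
for _ in range(200):
    su = np.exp(-lam * t)                       # latency survival
    # E-step: posterior probability of being uncured (events are uncured).
    w = np.where(d == 1, 1.0, pi * su / (1 - pi + pi * su))
    # M-step: closed-form updates for pi and the exponential rate.
    pi = w.mean()
    lam = d.sum() / (w * t).sum()
print(f"estimated uncured fraction = {pi:.3f}, latency rate = {lam:.3f}")
```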

6.
A generalized form of the Poisson distribution with two parameters is estimated by Bayesian techniques. When one of the parameters is known, several important parametric functions are estimated and a numerical comparison is drawn with estimates obtained by the methods of maximum likelihood and minimum variance unbiased estimation. The simplicity of the posterior distribution of the unknown parameter enables us to construct exact probability intervals and to devise a statistic for testing the homogeneity of several populations. When both parameters are unknown, dependent priors are considered. Although the posterior distributions are sensitive to the choice of prior, the posterior estimates are very stable, and we use the Pearson system of curves to construct approximate posterior confidence limits for the parameters.
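The kind of exact probability interval a simple posterior permits can be illustrated with the ordinary Poisson-Gamma conjugate pair, used here as a stand-in (the paper's generalized Poisson posterior is different, though similarly tractable when one parameter is known); the prior values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
y = rng.poisson(3.0, size=30)
a, b = 0.5, 0.1                              # Gamma(a, b) prior, rate parametrisation

# Conjugate update: posterior is Gamma(a + sum(y), rate b + n), exactly.
post = stats.gamma(a + y.sum(), scale=1 / (b + y.size))
print("posterior mean:", round(post.mean(), 3))
print("exact 95% probability interval:", post.ppf([0.025, 0.975]).round(3))
```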

7.
It is well known that the normal mixture with unequal variances has unbounded likelihood, so the corresponding global maximum likelihood estimator (MLE) is undefined. One commonly used solution is to put a constraint on the parameter space so that the likelihood is bounded; the EM algorithm can then be run on this constrained parameter space to find the constrained global MLE. However, choosing the constraint parameter is a difficult issue, and in many cases different choices may give different constrained global MLEs. In this article, we propose a profile log-likelihood method and a graphical way to find the maximum interior mode. Based on our proposed method, we can also see how the constraint parameter used in the constrained EM algorithm affects the constrained global MLE. Using two simulation examples and a real data application, we demonstrate the success of our new method in resolving the unboundedness of the mixture likelihood and locating the maximum interior mode.
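A hedged sketch of the profile log-likelihood idea for a two-component normal mixture: fix one component standard deviation on a grid, maximise over the remaining parameters, and read off the largest interior mode of the profile. The parametrisation, optimiser, and grid below are our choices, not the article's.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 2, 100)])

def neg_ll(params, s1):
    w, m1, m2, log_s2 = params
    p = 1 / (1 + np.exp(-w))                      # keep mixing weight in (0, 1)
    dens = p * norm.pdf(x, m1, s1) + (1 - p) * norm.pdf(x, m2, np.exp(log_s2))
    return -np.sum(np.log(dens + 1e-300))

# Profile over sigma1; spikes as sigma1 -> 0 (if the optimiser finds them)
# reflect the unbounded likelihood, while the interior mode is the sensible estimate.
s1_grid = np.exp(np.linspace(np.log(0.05), np.log(3.0), 40))
start = np.array([0.0, 0.0, 4.0, 0.0])
profile = []
for s1 in s1_grid:
    res = minimize(neg_ll, start, args=(s1,), method="Nelder-Mead")
    profile.append(-res.fun)
k = int(np.argmax(profile))
print(f"largest profile mode at sigma1 = {s1_grid[k]:.3f}")
```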

8.
The failure rate function commonly has a bathtub shape in practice. In this paper we discuss a regression model based on the new Weibull extended distribution developed by Xie et al. (2002), which can be used to model this type of failure rate function. Assuming censored data, we discuss parameter estimation by the maximum likelihood method and by a Bayesian approach in which Gibbs algorithms with Metropolis steps are used to obtain the posterior summaries of interest. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to assess global influence. In addition, case deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback-Leibler divergence. For different parameter settings, sample sizes, and censoring percentages, various simulations are performed to display and compare the empirical distribution of the martingale-type residual with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the martingale-type residual in log-Weibull extended models with censored data. Finally, we analyse a real data set under a log-Weibull extended regression model, performing diagnostic analysis and model checking based on the martingale-type residual to select an appropriate model.
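The residual construction can be sketched for a plain Weibull fit to censored data (no covariates and no extended shape parameter, so this is only an analogue of the log-Weibull extended regression setting). The deviance-type transform below is a common form of the martingale-type residual in this literature; all settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 300
t_true = rng.weibull(1.5, n) * 2.0
c = rng.exponential(3.0, n)
t, d = np.minimum(t_true, c), (t_true <= c).astype(float)

def neg_ll(theta):
    k, s = np.exp(theta)                               # shape, scale > 0
    H = (t / s) ** k                                   # cumulative hazard
    return -np.sum(d * (np.log(k / s) + (k - 1) * np.log(t / s)) - H)

k_hat, s_hat = np.exp(minimize(neg_ll, [0.0, 0.0], method="Nelder-Mead").x)
r_mart = d - (t / s_hat) ** k_hat                      # martingale residual
# Deviance-type transform (often called the martingale-type residual);
# the (1 - d) term keeps the log argument valid for censored cases.
r_dev = np.sign(r_mart) * np.sqrt(-2 * (r_mart + d * np.log(d - r_mart + (1 - d))))
print("mean / sd of martingale-type residuals:", r_dev.mean().round(3), r_dev.std().round(3))
```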

9.
Bayesian synthetic likelihood (BSL) is now a well-established method for performing approximate Bayesian parameter estimation for simulation-based models that do not possess a tractable likelihood function. BSL approximates the intractable likelihood of a carefully chosen summary statistic at a parameter value with a multivariate normal distribution, whose mean and covariance matrix are estimated from independent simulations of the model. Due to the parametric assumption implicit in BSL, it can be preferred to its nonparametric competitor, approximate Bayesian computation, in certain applications where a high-dimensional summary statistic is of interest. However, despite several successful applications of BSL, its widespread use in scientific fields may be hindered by the strong normality assumption. In this paper, we develop a semi-parametric approach that relaxes this assumption to an extent while maintaining the computational advantages of BSL without any additional tuning. We test our new method, semiBSL, on several challenging examples involving simulated and real data and demonstrate that semiBSL can be significantly more robust than BSL and another approach in the literature.
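The normality assumption BSL makes is easy to see in code. A minimal sketch with a toy model and summaries of our own choosing (semiBSL would replace the fitted normal with kernel-density marginals tied together by a Gaussian copula):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy model (assumed): y ~ N(theta, 1), summaries = (mean, variance).
rng = np.random.default_rng(9)
y_obs = rng.normal(2.0, 1.0, size=200)
s_obs = np.array([y_obs.mean(), y_obs.var(ddof=1)])

def synthetic_loglik(theta, m=200):
    # Simulate m datasets at theta, fit a multivariate normal to their
    # summaries, and evaluate its density at the observed summary.
    sims = rng.normal(theta, 1.0, size=(m, 200))
    S = np.column_stack([sims.mean(axis=1), sims.var(axis=1, ddof=1)])
    mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
    return multivariate_normal(mu, cov).logpdf(s_obs)

grid = np.linspace(1.0, 3.0, 41)
ll = [synthetic_loglik(t) for t in grid]
print("synthetic-likelihood estimate of theta:", grid[int(np.argmax(ll))])
```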

10.
Based on record values, the maximum likelihood, minimum variance unbiased, and Bayes estimators of the parameter of the one-parameter Burr Type X distribution are computed and compared. Bayesian and non-Bayesian confidence intervals for this parameter are also presented, and a Bayesian prediction interval for the s-th future record is obtained in closed form. Based on simulated record values, numerical computations and comparisons between the different estimators are given.
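A sketch of the record-based likelihood for the one-parameter Burr Type X distribution, F(x; theta) = (1 - exp(-x^2))^theta, maximised numerically. The standard upper-record likelihood L(theta) = f(r_n) * prod_{i<n} f(r_i) / (1 - F(r_i)) is used; simulation settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(10)
theta_true = 1.8
# Inverse cdf: x = sqrt(-log(1 - p**(1/theta))); extract upper records from a stream.
stream = np.sqrt(-np.log(1 - rng.random(5000) ** (1 / theta_true)))
records = [stream[0]]
for v in stream[1:]:
    if v > records[-1]:
        records.append(v)
r = np.array(records)

def neg_log_rec_lik(theta):
    u = 1 - np.exp(-r ** 2)                            # F(r)^(1/theta)
    log_f = np.log(2 * theta * r) - r ** 2 + (theta - 1) * np.log(u)
    log_surv = np.log1p(-u ** theta)                   # log(1 - F(r))
    return -(log_f.sum() - log_surv[:-1].sum())        # last record has no survival term

res = minimize_scalar(neg_log_rec_lik, bounds=(1e-3, 20.0), method="bounded")
print(f"record-based MLE of theta: {res.x:.3f} (from {len(r)} records)")
```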

11.
Probabilistic matching of records is widely used to create linked data sets for use in health science, epidemiological, economic, demographic and sociological research. Clearly, this type of matching can lead to linkage errors, which in turn can lead to bias and increased variability when standard statistical estimation techniques are used with the linked data. In this paper we develop unbiased regression parameter estimators for fitting a linear model with nested errors to probabilistically linked data. Since estimation of variance components is typically an important objective when fitting such a model, we also develop appropriate modifications to standard methods of variance components estimation in order to account for linkage error. In particular, we focus on three widely used methods of variance components estimation: analysis of variance, maximum likelihood and restricted maximum likelihood. Simulation results show that our estimators perform reasonably well when compared to standard estimation methods that ignore linkage errors.

12.
We propose a general family of nonparametric mixed effects models. Smoothing splines are used to model the fixed effects and are estimated by maximizing the penalized likelihood function. The random effects are generic and are modelled parametrically by assuming that the covariance function depends on a parsimonious set of parameters. These parameters and the smoothing parameter are estimated simultaneously by the generalized maximum likelihood method. We derive a connection between a nonparametric mixed effects model and a linear mixed effects model, which suggests a way of fitting a nonparametric mixed effects model using existing programs. The classical two-way mixed models and growth curve models are used as examples to demonstrate how to use smoothing spline analysis-of-variance decompositions to build nonparametric mixed effects models. As in the classical analysis of variance, components of these nonparametric mixed effects models can be interpreted as main effects and interactions. The penalized likelihood estimates of the fixed effects in a two-way mixed model are extensions of James-Stein shrinkage estimates to correlated observations. In an example, three nested nonparametric mixed effects models are fitted to a longitudinal data set.

13.
We show that the mean-model parameter is always orthogonal to the error distribution in generalized linear models. Thus, the maximum likelihood estimator of the mean-model parameter is asymptotically efficient regardless of whether the error distribution is known completely, known up to a finite vector of parameters, or left completely unspecified, in which case the likelihood is taken to be an appropriate semiparametric likelihood. Moreover, the maximum likelihood estimator of the mean-model parameter is asymptotically independent of the maximum likelihood estimator of the error distribution. This generalizes some well-known results for the special cases of normal, gamma, and multinomial regression models and, perhaps more interestingly, suggests that asymptotically efficient estimation and inference can always be obtained if the error distribution is nonparametrically estimated along with the mean. In contrast, estimation and inference using misspecified error distributions or variance functions are generally not efficient.

14.
Classical analysis of contingency tables employs (i) fixed sample sizes and (ii) the maximum likelihood and weighted least squares approaches to parameter estimation. It is well known, however, that certain important parameters, such as the main effect and interaction parameters, can never be estimated unbiasedly when the sample size is fixed a priori. We introduce a sequential unbiased estimator for the cell probabilities subject to log-linear constraints. As a simple consequence, we show how parameters such as those mentioned above may be estimated unbiasedly. Our unbiased estimator for the vector of cell probabilities is shown to be consistent in the sense of Wolfowitz (Ann. Math. Statist. 18, 1947). We give a sufficient condition on a multinomial stopping rule for the corresponding sufficient statistic to be complete. When this condition holds, we have a unique minimum variance unbiased estimator for the cell probabilities.

15.
Competing risks models are of great importance in reliability and survival analysis. In the literature they are often assumed to have independent causes of failure, which may be unreasonable. In this article, dependent causes of failure are considered by using the Marshall-Olkin bivariate Weibull distribution. After deriving some useful results for the model, we use maximum likelihood, fiducial inference, and Bayesian methods to estimate the unknown model parameters with a parameter transformation. Simulation studies are carried out to assess the performance of the three methods. Compared with the maximum likelihood method, the fiducial and Bayesian methods can provide better parameter estimates.
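The dependence structure can be made concrete by simulating from the Marshall-Olkin construction: latent exponential shocks, one shared between the two causes, powered into Weibull margins. The rates, shape, and variable names below are our illustrative choices; the paper's estimation methods are not reproduced.

```python
import numpy as np

# Marshall-Olkin bivariate Weibull: U1, U2, U0 independent exponentials with
# rates l1, l2, l0; T_j = min(U_j, U0)**(1/alpha).  The shared shock U0
# induces dependence (and possible ties) between the two failure causes.
rng = np.random.default_rng(11)
n, alpha = 1000, 1.5
l1, l2, l0 = 0.6, 0.9, 0.4
u1 = rng.exponential(1 / l1, n)
u2 = rng.exponential(1 / l2, n)
u0 = rng.exponential(1 / l0, n)
t1 = np.minimum(u1, u0) ** (1 / alpha)
t2 = np.minimum(u2, u0) ** (1 / alpha)

t_obs = np.minimum(t1, t2)                                   # observed failure time
cause = np.where(t1 < t2, 1, np.where(t2 < t1, 2, 0))        # 0 = simultaneous (shared shock)
print("cause frequencies (cause 1, cause 2, simultaneous):",
      [round(float((cause == c).mean()), 3) for c in (1, 2, 0)])
```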

16.
In the framework of model-based cluster analysis, finite mixtures of Gaussian components represent an important class of statistical models widely employed for dealing with quantitative variables. Within this class, we propose novel models in which constraints on the component-specific variance matrices define Gaussian parsimonious clustering models. Specifically, the proposed models are obtained by assuming that the variables can be partitioned into groups that are conditionally independent within components, producing component-specific variance matrices with a block-diagonal structure. This approach extends the methods for model-based cluster analysis and makes them more flexible and versatile. In this paper, Gaussian mixture models are studied under the above assumption. Identifiability conditions are proved, and the model parameters are estimated through the maximum likelihood method using the Expectation-Maximization algorithm. The Bayesian information criterion is proposed for selecting the partition of the variables into conditionally independent groups, and the consistency of this criterion is proved under regularity conditions. In order to examine and compare models with different partitions of the set of variables, a hierarchical algorithm is suggested. A wide class of parsimonious Gaussian models is also presented by parameterizing the component-variance matrices according to their spectral decomposition. The effectiveness and usefulness of the proposed methodology are illustrated with two examples based on real datasets.
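A rough analogue of the BIC-based structure selection, using scikit-learn. sklearn's GaussianMixture offers only the extreme covariance structures ('full' versus 'diag', i.e. one block per variable), not general block-diagonal ones, so this sketches the selection logic rather than the authors' algorithm; the data are simulated for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(12)
X = np.vstack([rng.normal([0, 0, 0], 1.0, size=(150, 3)),
               rng.normal([4, 4, 0], [1.0, 1.0, 2.0], size=(150, 3))])

# Compare covariance structures and component counts by BIC (lower is better).
for cov in ("full", "diag"):
    for k in (1, 2, 3):
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             n_init=5, random_state=0).fit(X)
        print(f"covariance={cov:5s} k={k}  BIC={gm.bic(X):.1f}")
```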

17.
Recently, Bolfarine et al. [Bimodal symmetric-asymmetric power-normal families. Commun Statist Theory Methods. Forthcoming. doi:10.1080/03610926.2013.765475] introduced a bimodal asymmetric model having the normal and skew-normal as special cases. Here, we prove a stochastic representation for their bimodal asymmetric model and use it to generate random numbers from that model. It is shown how the resulting algorithm can be seen as an improvement over the rejection method. We also discuss practical and numerical aspects of estimating the model parameters by maximum likelihood under simple random sampling. We show that a unique stationary point of the likelihood equations exists except when all observations have the same sign. However, the location-scale extension of the model usually presents two or more roots, and this fact is illustrated here. The standard maximization routines available in the R system (Broyden-Fletcher-Goldfarb-Shanno (BFGS), Trust, Nelder-Mead) were considered in our implementations and exhibited similar performance. We show the usefulness of inspecting profile log-likelihoods as a method for obtaining starting values for maximization and illustrate data analysis with the location-scale model in the presence of multiple roots. A simple Bayesian model is discussed in the context of a data set that presents a flat likelihood in the direction of the skewness parameter.

18.
The maximum likelihood and Bayesian approaches are considered for the two-parameter generalized exponential distribution based on record values together with the number of trials following each record (inter-record times). The maximum likelihood estimates are obtained under the inverse sampling and random sampling schemes. It is shown that the maximum likelihood estimator of the shape parameter converges in mean square to the true value when the scale parameter is known. The Bayes estimates of the parameters are developed by using Lindley's approximation and Markov chain Monte Carlo methods, owing to the lack of explicit forms under the squared error and linear-exponential loss functions. Confidence intervals for the parameters are constructed based on asymptotic and Bayesian methods. The Bayes and maximum likelihood estimators are compared in terms of estimated risk by Monte Carlo simulation, and the estimators based on record values alone are compared with those based on record values together with their corresponding inter-record times.

19.
Summary. The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of it for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite-dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers.

20.
In this paper, we consider the problem of estimating semi-linear regression models. Using invariance arguments, Bhowmik and King [2007. Maximal invariant likelihood based testing of semi-linear models. Statist. Papers 48, 357-383] derived the probability density function of the maximal invariant statistic for the non-linear component of these models. Using this density function as a likelihood function allows us to estimate these models in a two-step process. First, the non-linear component parameters are estimated by maximising the maximal invariant likelihood function. Then the non-linear component, with the parameter values replaced by estimates, is treated as a regressor, and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood and maximum profile-marginal likelihood estimation. We find that maximising the maximal invariant likelihood function typically yields less biased and lower-variance estimates than full maximum likelihood.
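The two-step structure is easy to sketch with ordinary profile least squares standing in for the maximal invariant likelihood (the actual MIL objective from Bhowmik and King is not reproduced here). The model, parameter names, and grid below are our own illustrative assumptions.

```python
import numpy as np

# Assumed semi-linear model: y = b0 + b1 * exp(-g * x) + e, with non-linear
# parameter g and linear parameters (b0, b1).
rng = np.random.default_rng(13)
x = np.linspace(0, 5, 120)
y = 1.0 + 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.2, x.size)

def profile_sse(g):
    # For a fixed g, the linear parameters are profiled out by least squares.
    Z = np.column_stack([np.ones_like(x), np.exp(-g * x)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.sum((y - Z @ beta) ** 2)

grid = np.linspace(0.05, 3.0, 300)                 # step 1: estimate g
g_hat = grid[np.argmin([profile_sse(g) for g in grid])]
Z = np.column_stack([np.ones_like(x), np.exp(-g_hat * x)])
beta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)   # step 2: OLS given g_hat
print("g_hat:", round(g_hat, 3), "beta_hat:", beta_hat.round(3))
```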

