Similar Documents
20 similar documents found (search time: 31 ms)
1.
We use a logistic model to obtain point and interval estimates of the marginal risk difference in observational studies and randomized trials with a dichotomous outcome. We prove that the maximum likelihood estimate of the marginal risk difference is unbiased in finite samples and highly robust to the effects of dispersing covariates. We use the approximate normal distribution of the maximum likelihood estimates of the logistic model parameters to derive the approximate distribution of the maximum likelihood estimate of the marginal risk difference, and from it the interval estimate. We illustrate the method with a real medical example.
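The marginal risk difference described above is commonly computed by standardization over the fitted logistic model: fit the model by maximum likelihood, then average the predicted risks with the treatment indicator set to 1 and to 0, and take the difference. A minimal sketch on simulated data (the variable names, coefficients and data-generating setup are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum likelihood fit of a logistic regression by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None])   # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def marginal_risk_difference(treat, covar, y):
    """Standardized (marginal) risk difference from a fitted logistic model."""
    X = np.column_stack([np.ones_like(y, dtype=float), treat, covar])
    beta = fit_logistic(X, y.astype(float))
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1.0, 0.0            # set treatment to 1 / to 0 for everyone
    p1 = 1.0 / (1.0 + np.exp(-X1 @ beta))
    p0 = 1.0 / (1.0 + np.exp(-X0 @ beta))
    return p1.mean() - p0.mean()

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # a single covariate
a = rng.integers(0, 2, size=n)                # randomized binary treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * a + 0.5 * z))))
rd = marginal_risk_difference(a, z, y)        # point estimate of the marginal RD
```

An interval estimate would then follow via the delta method from the asymptotic covariance of the logistic parameter estimates, as the abstract describes.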

2.
Existing research on mixtures of regression models is limited to directly observed predictors; estimating mixtures of regressions from measurement error data poses challenges for statisticians. For linear regression models with measurement error, the naive ordinary least squares method, which directly substitutes the observed surrogates for the unobserved error-prone variables, yields an inconsistent estimate of the regression coefficients. The same inconsistency afflicts the naive mixtures-of-regressions estimate, which is based on the traditional maximum likelihood estimator and simply ignores the measurement error. To resolve this inconsistency, we propose using the deconvolution method to estimate the mixture likelihood of the observed surrogates; our proposed estimate is then found by maximizing the estimated mixture likelihood. A generalized EM algorithm is also developed to compute the estimate. Simulation results demonstrate that the proposed estimation procedures work well and perform much better than the naive estimates.
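The inconsistency of the naive estimate under measurement error can be seen in a few lines: regressing on the surrogate attenuates the slope by the classical reliability ratio var(x)/(var(x)+var(u)). A small illustration with made-up parameters (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)                        # true, unobserved predictor
w = x + rng.normal(scale=0.8, size=n)         # observed surrogate, with error
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # true slope is 2

# Naive OLS on the surrogate is attenuated toward zero by the
# reliability ratio var(x)/(var(x)+var(u)) = 1/1.64, even as n grows.
naive_slope = np.cov(w, y)[0, 1] / w.var()
```

The deconvolution approach in the abstract corrects for this by estimating the likelihood of the latent predictor rather than plugging in the surrogate.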

3.
Suppose that subjects in a population follow the model f(y* | x*; θ), where y* denotes a response, x* denotes a vector of covariates and θ is the parameter to be estimated. We consider response-biased sampling, in which a subject is observed with a probability that is a function of its response. Such response-biased sampling frequently occurs in econometrics, epidemiology and survey sampling. The semiparametric maximum likelihood estimate of θ is derived, along with its asymptotic normality, efficiency and variance estimates. The proposed estimate can be used as a maximum partial likelihood estimate in stratified response-selective sampling. Some computational algorithms are also provided.

4.
We present maximum likelihood estimation (MLE) via the particle swarm optimization (PSO) algorithm to estimate the parameters of a mixture of two Weibull distributions with complete and multiply censored data. A simulation study is conducted to assess the performance of MLE via the PSO algorithm, the quasi-Newton method and the expectation-maximization (EM) algorithm for different parameter settings and sample sizes in both uncensored and censored cases. The simulation results show that the PSO algorithm outperforms the quasi-Newton method and the EM algorithm in most cases in terms of bias and root mean square error. Two numerical examples are used to demonstrate the performance of the proposed method.
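A bare-bones global-best PSO maximizing the complete-data likelihood of a two-component Weibull mixture might look as follows; the swarm constants, parameter bounds and simulated data are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def weibull_mix_nll(theta, t):
    """Negative log-likelihood of a two-component Weibull mixture (complete data)."""
    p, k1, l1, k2, l2 = theta
    if not (0 < p < 1) or min(k1, l1, k2, l2) <= 0:
        return np.inf                          # guard against invalid parameters
    def pdf(k, l):
        return (k / l) * (t / l) ** (k - 1) * np.exp(-(t / l) ** k)
    mix = p * pdf(k1, l1) + (1 - p) * pdf(k2, l2)
    return -np.sum(np.log(mix + 1e-300))

def pso_minimize(f, lo, hi, n_particles=60, n_iter=300, seed=0):
    """Minimal global-best particle swarm optimizer over a box."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pval)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # keep particles inside the box
        val = np.array([f(xi) for xi in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

rng = np.random.default_rng(2)
t = np.concatenate([rng.weibull(1.5, 600) * 2.0,   # component 1: shape 1.5, scale 2
                    rng.weibull(4.0, 400) * 8.0])  # component 2: shape 4, scale 8
lo = np.array([0.05, 0.2, 0.1, 0.2, 0.1])
hi = np.array([0.95, 8.0, 15.0, 8.0, 15.0])
theta_hat, nll = pso_minimize(lambda th: weibull_mix_nll(th, t), lo, hi)
```

Censored observations would contribute the log-survival function instead of the log-density in the objective; the swarm itself is unchanged.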

5.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.

6.
The non-parametric maximum likelihood estimator (NPMLE) of the distribution function with doubly censored data can be computed using the self-consistent algorithm (Turnbull, 1974). We extend the self-consistent algorithm to include a constraint on the NPMLE. We then show how to construct confidence intervals and test hypotheses based on the NPMLE via the empirical likelihood ratio. Finally, we present some numerical comparisons of the performance of the above method with another method that makes use of the influence functions.
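Turnbull's self-consistency idea iterates S(t) = n⁻¹ Σᵢ P(Tᵢ > t | data, S) until a fixed point is reached. The sketch below applies it to the simpler right-censored case, where the fixed point is the Kaplan-Meier NPMLE; the doubly censored case in the abstract redistributes mass on both sides in the same spirit. The four-observation dataset is illustrative:

```python
import numpy as np

def self_consistent_survival(times, delta, n_iter=200):
    """Self-consistent estimate of the survival curve under right censoring.

    Iterates S(t) = (1/n) * sum_i P(T_i > t | data, current S). For
    right-censored data the fixed point is the Kaplan-Meier NPMLE;
    Turnbull (1974) extends the same recursion to doubly censored data.
    """
    grid = np.sort(np.unique(times))
    n = len(times)
    S = np.full(len(grid), 0.5)               # any positive starting curve
    pos = np.searchsorted(grid, times)        # grid index of each observation
    for _ in range(n_iter):
        S_new = np.empty_like(S)
        for j in range(len(grid)):
            total = 0.0
            for i in range(n):
                if times[i] > grid[j]:
                    total += 1.0              # known to be alive past grid[j]
                elif delta[i] == 0:           # censored at or before grid[j]:
                    total += S[j] / max(S[pos[i]], 1e-12)  # redistribute mass
            S_new[j] = total / n
        S = S_new
    return grid, S

# death at 1, censored at 2, deaths at 3 and 4 -> KM: 0.75, 0.75, 0.375, 0
grid, S = self_consistent_survival(np.array([1.0, 2.0, 3.0, 4.0]),
                                   np.array([1, 0, 1, 1]))
```

The constraint extension in the abstract amounts to restricting the fixed-point iteration (or the equivalent EM) to curves satisfying the constraint.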

7.
This article introduces a novel nonparametric penalized likelihood hazard estimation when the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. Consistency results and asymptotic properties of the proposed hazard estimator are derived. Simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring if the sign of the dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.

8.
In this article, we examine the performance of the maximum likelihood estimates of the Burr XII parameters for constant-stress partially accelerated life tests under multiply censored data. Two maximum likelihood estimation methods are considered. One is based on the observed-data likelihood function, with the maximum likelihood estimates obtained by the quasi-Newton algorithm. The other is based on the complete-data likelihood function, with the maximum likelihood estimates derived by the expectation-maximization (EM) algorithm. The variance-covariance matrices are derived to construct confidence intervals for the parameters. The performance of the two algorithms is compared in a simulation study. The results show that maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of absolute relative bias, bias, root mean square error and coverage rate. Finally, a numerical example illustrates the performance of the proposed methods.

9.
The power of a clinical trial depends partly on its sample size. With continuous data, the sample size needed to attain a desired power is a function of the within-group standard deviation. An estimate of this standard deviation can be obtained during the trial itself from interim data; the estimate is then used to re-estimate the sample size. Gould and Shih proposed a method, based on the EM algorithm, which they claim produces a maximum likelihood estimate of the within-group standard deviation while preserving the blind, and that the estimate is quite satisfactory. However, others have claimed that the method can produce non-unique and/or severe underestimates of the true within-group standard deviation. Here the method is thoroughly examined to resolve these conflicting claims and, via simulation, to assess its validity and the properties of its estimates. The results show that the apparent non-uniqueness of the method's estimate is due to an apparently innocuous alteration that Gould and Shih made to the EM algorithm. When this alteration is removed, the method is valid in that it produces the maximum likelihood estimate of the within-group standard deviation (and also of the within-group means). However, the estimate is negatively biased and has a large standard deviation. The simulations show that with a standardized difference of 1 or less, which is typical in most clinical trials, the standard deviation from the combined samples, ignoring the groups, is a better estimator, despite its obvious positive bias.
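The closing claim is easy to check by simulation: with two groups a standardized difference δ apart, the combined-sample variance targets σ² + δ²σ²/4 under equal allocation, so the combined SD overstates σ by a modest, predictable amount. A quick check under assumed settings (δ = 1, 30 subjects per arm):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, delta = 1.0, 1.0            # within-group SD and standardized difference 1
n_rep, n_per = 2000, 30
bias_combined = []
for _ in range(n_rep):
    a = rng.normal(0.0, sigma, n_per)
    b = rng.normal(delta * sigma, sigma, n_per)
    # blinded estimate: SD of the pooled sample, ignoring group labels
    combined_sd = np.concatenate([a, b]).std(ddof=1)
    bias_combined.append(combined_sd - sigma)
# mean bias should be near sqrt(1 + delta**2/4) - 1, i.e. about +0.12
```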

10.
We compare EM, SEM, and MCMC algorithms to estimate the parameters of the Gaussian mixture model. We focus on problems in estimation arising from the likelihood function having a sharp ridge or saddle points. We use both synthetic and empirical data with those features. The comparison includes Bayesian approaches with different prior specifications and various procedures to deal with label switching. Although the solutions provided by these stochastic algorithms are more often degenerate, we conclude that SEM and MCMC may display faster convergence and improve the ability to locate the global maximum of the likelihood function.
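For reference, the EM iteration for a two-component univariate Gaussian mixture alternates posterior responsibilities (E-step) with weighted maximum likelihood updates (M-step). A minimal sketch on synthetic, well-separated data, where EM has no trouble; the ridge and saddle-point problems studied above arise in harder configurations:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200):
    """EM for a two-component univariate Gaussian mixture (deterministic init)."""
    w = 0.5
    mu = np.quantile(x, [0.25, 0.75])          # spread the initial means apart
    sd = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        d1 = w * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        d2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = d1 / (d1 + d2)
        # M-step: weighted maximum likelihood updates
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=r),
                               np.average((x - mu[1]) ** 2, weights=1 - r)]))
    return w, mu, sd

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 700), rng.normal(3, 0.5, 300)])
w, mu, sd = em_gaussian_mixture(x)
```

SEM would replace the soft responsibilities `r` with a random hard assignment drawn from them each iteration, which is what gives it a chance to jump off ridges.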

11.
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood‐based ABC procedures.
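The simplest ABC scheme underlying such an approximate likelihood is rejection sampling: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands within a tolerance of the observed one. A toy sketch with an assumed normal-mean example (a crude stand-in; the paper's estimator maximizes the approximate likelihood rather than averaging accepted draws):

```python
import numpy as np

def rejection_abc(obs_stat, simulate, prior_sample, n_sims=20000, eps=0.05, seed=5):
    """Rejection ABC: keep prior draws whose simulated summary is within eps."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_sims):
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - obs_stat) < eps:
            kept.append(theta)
    return np.array(kept)

# Toy model: normal mean with known sd=1, summarized by the sample mean.
true_theta, n = 1.5, 50
rng = np.random.default_rng(6)
obs = rng.normal(true_theta, 1, n).mean()
post = rejection_abc(
    obs,
    simulate=lambda th, r: r.normal(th, 1, n).mean(),
    prior_sample=lambda r: r.uniform(-5, 5),
)
theta_hat = post.mean()   # accepted draws approximate the ABC posterior
```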

12.
We consider fitting the so-called Emax model to continuous response data from clinical trials designed to investigate the dose-response relationship for an experimental compound. When there is insufficient information in the data to estimate all of the parameters because the high-dose asymptote is ill defined, maximum likelihood estimation fails to converge. We explore the use of either bootstrap resampling or the profile likelihood to make inferences about effects and the doses required to give a particular effect, using limits on the parameter values to obtain the maximum of the likelihood when the high-dose asymptote is ill defined. The results show these approaches to be comparable with, or better than, some others that have been used when maximum likelihood estimation fails to converge, and that the profile likelihood method outperforms the bootstrap resampling method used. Copyright © 2014 John Wiley & Sons, Ltd.
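In practice, bounded nonlinear least squares (equivalent to maximum likelihood under normal errors) is how such limits on the parameter values are imposed. A sketch with scipy on made-up dose-response data; the bounds, doses and true parameters are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Three-parameter Emax model: baseline, maximum effect, ED50."""
    return e0 + emax_ * dose / (ed50 + dose)

rng = np.random.default_rng(8)
dose = np.repeat([0, 5, 25, 50, 100], 20)            # 20 subjects per dose
y = emax(dose, 1.0, 10.0, 20.0) + rng.normal(0, 1.0, dose.size)

# Bounded least squares; the bounds keep the fit from diverging when the
# high-dose asymptote (emax_, ed50) is poorly identified by the data.
popt, _ = curve_fit(emax, dose, y, p0=[0.0, 5.0, 10.0],
                    bounds=([-10, 0, 0.1], [10, 50, 500]))
```

Profiling then fixes one parameter at a grid of values and re-maximizes over the rest, giving the profile likelihood used for inference in the abstract.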

13.
For linear regression models with non-normally distributed errors, the least squares estimate (LSE) loses efficiency relative to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function using a nonparametric kernel density estimate of the error density based on some initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE, which assumes the error density is known. In addition, we propose an EM-type algorithm to maximize the estimated likelihood function and show that the KDRE can be viewed as an iteratively weighted least squares estimate, which provides insight into the adaptiveness of the KDRE to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed estimation procedure can yield substantial efficiency gains for non-normal errors. Moreover, the efficiency gain can be achieved even for small sample sizes.

14.
This paper presents an easy-to-compute semi-parametric (SP) method to estimate the simple disequilibrium model proposed by Fair and Jaffee (1972). The proposed approach is based on a non-parametric interpretation of the EM (Expectation and Maximization) principle (Dempster et al., 1977) and the least squares method. The simple disequilibrium model comprises the demand equation, the supply equation, and the condition that only the minimum of quantity demanded and quantity supplied is observed. The method used here allows one to estimate the disequilibrium model consistently without fully specifying the distribution of the error terms in the demand and supply equations. Our Monte Carlo study suggests that the proposed estimator is better than the normal maximum likelihood estimator under asymmetric error distributions, and comparable to the maximum likelihood estimator under symmetric error distributions in finite samples. Aggregate U.S. labor market data from Quandt and Rosen (1988) are used to illustrate the procedure.

15.
We present a maximum likelihood estimation procedure for the multivariate frailty model. The estimation is based on a Monte Carlo EM algorithm. The expectation step is approximated by averaging over random samples drawn from the posterior distribution of the frailties using rejection sampling. The maximization step reduces to a standard partial likelihood maximization. We also propose a simple rule, based on the relative change in the parameter estimates, for choosing the sample size at each iteration and a stopping time for the algorithm. An important new feature is that the algorithm achieves absolute convergence through sample size determination and an efficient sampling technique. The method is illustrated using a rat carcinogenesis dataset and data on the vase lifetimes of cut roses. The estimation results are compared with approximate inference based on penalized partial likelihood using these two examples. Unlike penalized partial likelihood estimation, the proposed full maximum likelihood estimation method accounts for all the uncertainty when estimating standard errors for the parameters.

16.
In this paper, we develop a first-order nonlinear autoregressive (AR) model with skew-normal innovations. A semiparametric method is proposed to estimate the nonlinear part of the model, using the conditional least squares method for parametric estimation and a nonparametric kernel approach for the AR adjustment estimation. Computational techniques for parameter estimation are then carried out by the maximum likelihood (ML) approach using expectation-maximization (EM) type optimization, and an explicit iterative form for the ML estimators is obtained. The accuracy of the proposed methods is verified in a simulation study and a real application.

17.
We consider the estimation of the life length of people who were born in seventeenth- or eighteenth-century England. The data consist of a sequence of times of life events that either ends with a time of death or is right-censored by an unobserved time of migration. We propose a semiparametric model for the data and use a maximum likelihood method to estimate its unknown parameters. We prove the consistency of the maximum likelihood estimators and describe an algorithm for computing the estimates numerically. We apply the algorithm to the data and present the resulting estimates.

18.
Time series regression models have been widely studied in the literature. However, the statistical analysis of replicated time series regression models has received little attention. In this paper, we study the application of the quasi-least squares method to estimate the parameters in a replicated time series model with errors that follow an autoregressive process of order p. We also discuss two other established methods for estimating the parameters: maximum likelihood assuming normality, and the Yule-Walker method. When the number of repeated measurements is bounded and the number of replications n goes to infinity, the regression and autocorrelation parameter estimates are consistent and asymptotically normal for all three methods. In essence, the three methods estimate the regression parameters equally efficiently and differ only in how they estimate the autocorrelation. For p=2 and normal data, we use simulations to show that the quasi-least squares estimate of the autocorrelation is clearly better than the Yule-Walker estimate, and is nearly as good as the maximum likelihood estimate over almost the entire parameter space.
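For concreteness, the Yule-Walker estimate referred to above solves the sample autocovariance equations. A minimal sketch for a single AR(2) series (the replicated-series setting in the paper pools this information across replications):

```python
import numpy as np

def yule_walker_ar2(x):
    """Yule-Walker estimate of AR(2) coefficients from sample autocovariances."""
    x = x - x.mean()
    n = x.size
    c = np.array([x[:n - k] @ x[k:] for k in range(3)]) / n  # c[0], c[1], c[2]
    R = np.array([[c[0], c[1]],
                  [c[1], c[0]]])          # Toeplitz autocovariance matrix
    return np.linalg.solve(R, c[1:])      # solves R @ phi = (c1, c2)

# Simulate a stationary AR(2) with phi = (0.5, -0.3) and recover it.
rng = np.random.default_rng(7)
phi = np.array([0.5, -0.3])
n = 5000
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + e[t]
phi_hat = yule_walker_ar2(x)
```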

19.
We construct approximate confidence intervals for a nonparametric regression function, using polynomial splines with free-knot locations. The number of knots is determined by generalized cross-validation. The estimates of the knot locations and coefficients are obtained through a nonlinear least squares solution that corresponds to the maximum likelihood estimate. Confidence intervals are then constructed based on the asymptotic distribution of the maximum likelihood estimator. The average coverage probabilities and the accuracy of the estimate are examined via simulation, including comparisons between our method and some existing methods such as smoothing splines and variable-knot selection, as well as a Bayesian version of the variable-knot method. Simulation results indicate that our method works well for smooth underlying functions and also reasonably well for discontinuous functions. It also performs well for fairly small sample sizes.

20.
Pan, Wei; Chappell, Rick. Lifetime Data Analysis, 1999, 5(3): 281-291.
We show that under reasonable conditions the nonparametric maximum likelihood estimate (NPMLE) of the distribution function from left-truncated and case 1 interval-censored data is inconsistent, in contrast to the consistency properties of the NPMLE from only left-truncated data or only interval-censored data. However, the conditional NPMLE is shown to be consistent. Numerical examples are provided to illustrate their finite sample properties.
