Similar Documents
20 similar documents retrieved.
1.
The size distortion problem clearly indicates that the asymptotic approximation is poor in small samples in the Markov-switching regression model. This paper shows that the bootstrap procedure can mitigate this problem. Our Monte Carlo simulation results reveal that the bootstrap approximation to the distribution of the maximum likelihood estimator can often be good when the sample size is small.

2.
When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute values of the parameter estimates of the selected variables, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group.
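A minimal sketch of this kind of bootstrap correction for stepwise-selection bias, written for the simpler linear-regression setting of Miller's cited procedure rather than for the Cox model; the forward-selection routine, synthetic data, and number of resamples are illustrative assumptions, not the paper's implementation.

```python
# Sketch: bootstrap correction of stepwise-selection bias (linear-regression
# analogue of the procedure described above; data and settings are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta = np.array([0.5, 0.3, 0.0, 0.0, 0.0, 0.0])      # only two truly active covariates
y = X @ beta + rng.normal(size=n)

def forward_select(X, y, k=2):
    """Greedy forward selection of k covariates by residual sum of squares."""
    selected = []
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ coef) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
    coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    return selected, coef

sel, coef = forward_select(X, y)

# Bootstrap the *whole* selection-plus-fit procedure and accumulate, for each
# selected variable, the difference between its resampled and full-data estimates.
B = 500
bias_acc, counts = np.zeros(len(sel)), np.zeros(len(sel))
for _ in range(B):
    idx = rng.integers(0, n, n)
    sel_b, coef_b = forward_select(X[idx], y[idx])
    for pos, j in enumerate(sel):
        if j in sel_b:
            bias_acc[pos] += coef_b[sel_b.index(j)] - coef[pos]
            counts[pos] += 1

bias = bias_acc / np.maximum(counts, 1)
coef_corrected = coef - bias      # bias-corrected coefficients for the selected variables
print("selected:", sel, "raw:", coef.round(3), "corrected:", coef_corrected.round(3))
```

The key point, as in the abstract, is that the selection step is repeated inside every bootstrap replicate, so the correction reflects the competition among covariates rather than the fit of a fixed model.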

3.
Bootstrapping has been used as a diagnostic tool for validating model results for a wide array of statistical models. Here we evaluate the use of the non-parametric bootstrap for model validation in mixture models. We show that the bootstrap is problematic for validating the results of class enumeration and demonstrating the stability of parameter estimates in both finite mixture and regression mixture models. In only 44% of simulations did bootstrapping detect the correct number of classes in at least 90% of the bootstrap samples for a finite mixture model without any model violations. For regression mixture models and cases with violated model assumptions, the performance was even worse. Consequently, we cannot recommend the non-parametric bootstrap for validating mixture models.

The cause of the problem is that, when resampling with replacement is used, influential individual observations have a high likelihood of being sampled many times. The presence of multiple replications of even moderately extreme observations is shown to lead to additional latent classes being extracted. To verify that these replications cause the problems, we show that leave-k-out cross-validation, in which sub-samples are taken without replacement, does not suffer from the same problem.
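The mechanism is easy to demonstrate numerically: under with-replacement resampling even a single extreme observation is frequently duplicated, whereas a leave-k-out sub-sample can contain it at most once. The sketch below (synthetic data, illustrative sizes) simply counts how often that happens.

```python
# Sketch: frequency with which one extreme observation is replicated under
# non-parametric bootstrap resampling versus leave-k-out subsampling.
import numpy as np

rng = np.random.default_rng(1)
n, B, k = 300, 2000, 30
x = rng.normal(size=n)
x[0] = 6.0                              # one moderately extreme observation (illustrative)

dup_boot = sum(np.sum(rng.integers(0, n, n) == 0) >= 2 for _ in range(B))
dup_sub = sum(np.sum(rng.choice(n, n - k, replace=False) == 0) >= 2 for _ in range(B))

print(f"P(extreme point sampled >= 2 times): bootstrap ~ {dup_boot / B:.2f}, "
      f"leave-{k}-out = {dup_sub / B:.2f}")        # the second probability is always 0
```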


4.
Assume that the observed X has a Poisson distribution with mean λp, where λ > 0 is fixed and unknown but p has a known probability distribution. This model is studied and compared with the approximate model X ~ Poisson(λE[p]). One result is that if p has the uniform distribution on (0, 1), then the model has a monotone likelihood ratio (MLR). In certain other cases MLR may fail.

5.
We consider fitting the so-called Emax model to continuous response data from clinical trials designed to investigate the dose–response relationship for an experimental compound. When there is insufficient information in the data to estimate all of the parameters because the high-dose asymptote is ill defined, maximum likelihood estimation fails to converge. We explore the use of either bootstrap resampling or the profile likelihood to make inferences about effects and doses required to give a particular effect, using limits on the parameter values to obtain the maximum of the likelihood when the high-dose asymptote is ill defined. The results show that these approaches are comparable with, or better than, some others that have been used when maximum likelihood estimation fails to converge, and that the profile likelihood method outperforms the bootstrap resampling method used. Copyright © 2014 John Wiley & Sons, Ltd.
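As a rough illustration of the bounded-likelihood device, the sketch below fits a three-parameter Emax curve, E(d) = E0 + Emax·d/(ED50 + d), by bounded least squares (equivalent to ML under normal errors) and then profiles over ED50 on a grid; dose levels, data and bounds are invented for illustration and are not taken from the paper.

```python
# Sketch: bounded fit of an Emax dose-response curve plus a crude profile over ED50.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
dose = np.repeat([0.0, 5.0, 25.0, 50.0, 100.0], 20)

def emax_curve(d, e0, emax, ed50):
    return e0 + emax * d / (ed50 + d)

# True ED50 above the top dose, so the high-dose asymptote is poorly identified.
y = emax_curve(dose, 1.0, 2.0, 150.0) + rng.normal(scale=0.8, size=dose.size)

# Box constraints keep the optimiser from wandering off when the asymptote is ill defined.
bounds = ([-np.inf, 0.0, 0.1], [np.inf, 10.0, 1000.0])
theta_hat, _ = curve_fit(emax_curve, dose, y, p0=[0.0, 1.0, 20.0], bounds=bounds)

# Crude profile over ED50: refit the remaining parameters on a grid and keep the RSS.
grid = np.linspace(1.0, 1000.0, 60)
profile_rss = []
for ed50 in grid:
    fit2, _ = curve_fit(lambda d, e0, emax: emax_curve(d, e0, emax, ed50),
                        dose, y, p0=[0.0, 1.0])
    profile_rss.append(np.sum((y - emax_curve(dose, *fit2, ed50)) ** 2))

print("bounded estimate:", theta_hat.round(2),
      "ED50 minimising the profile RSS:", grid[int(np.argmin(profile_rss))])
```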

6.
We derive analytic expressions for the biases of the maximum likelihood estimators of the scale parameter in the half-logistic distribution with known location, and of the location parameter when the latter is unknown. Using these expressions to bias-correct the estimators is highly effective, without adverse consequences for estimation mean squared error. The overall performance of the first of these bias-corrected estimators is slightly better than that of a bootstrap bias-corrected estimator. The bias-corrected estimator of the location parameter significantly outperforms its bootstrap-based counterpart. Taking computational costs into account, the analytic bias corrections clearly dominate the use of the bootstrap.
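For comparison, the bootstrap competitor mentioned above is simple to set up; a minimal sketch for the known-location case follows, with synthetic data and an illustrative sample size and number of replicates.

```python
# Sketch: bootstrap bias-corrected MLE of the half-logistic scale parameter
# with the location fixed at zero (the bootstrap competitor discussed above).
import numpy as np
from scipy.stats import halflogistic

rng = np.random.default_rng(3)
n = 20
data = halflogistic.rvs(loc=0.0, scale=2.0, size=n, random_state=rng)

_, scale_hat = halflogistic.fit(data, floc=0)            # MLE with known location

B = 1000
boot = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=n, replace=True)
    _, boot[b] = halflogistic.fit(resample, floc=0)

scale_bc = 2 * scale_hat - boot.mean()                   # theta_bc = 2*theta_hat - mean(theta*)
print(f"MLE: {scale_hat:.3f}, bootstrap bias-corrected: {scale_bc:.3f}")
```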

7.
Sun W, Li H. Lifetime Data Analysis, 2004, 10(3): 229–245
The additive genetic gamma frailty model has been proposed for genetic linkage analysis of complex diseases to account for variable age of onset and possible covariate effects. To avoid ascertainment biases in parameter estimates, retrospective likelihood ratio tests are often used, which may result in loss of efficiency due to conditioning. This paper considers the case in which sibships are ascertained by having at least two sibs affected with the disease before a given age, and provides two approaches for estimating the parameters in the additive gamma frailty model. One approach is based on the likelihood function conditioning on the ascertainment event; the other is based on maximizing a full ascertainment-adjusted likelihood. Explicit forms for these likelihood functions are derived. Simulation studies indicate that when the baseline hazard function can be correctly pre-specified, both approaches give accurate estimates of the model parameters. However, when the baseline hazard function has to be estimated simultaneously, only the ascertainment-adjusted likelihood method gives an unbiased estimate of the parameters. These results imply that the ascertainment-adjusted likelihood ratio test in the context of the additive genetic gamma frailty model may be used for genetic linkage analysis.

8.
We consider the issue of performing accurate small-sample inference in the beta autoregressive moving average model, which is useful for modeling and forecasting continuous variables that assume values in the interval (0, 1). Inferences based on conditional maximum likelihood estimation have good asymptotic properties, but their performance in small samples may be poor. We therefore propose bootstrap bias corrections of the point estimators and different bootstrap strategies for confidence interval improvements. Our Monte Carlo simulations show that finite-sample inference based on bootstrap corrections is much more reliable than the usual inferences. We also present an empirical application.
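The interval strategies compared in such studies are typically built from a vector of bootstrap replicates; a generic sketch of the percentile and basic (reverse-percentile) intervals is shown below, with a plain sample mean standing in for the conditional-ML estimator of the βARMA model (all values illustrative).

```python
# Sketch: percentile and basic bootstrap confidence intervals from replicates
# of a simple estimator (stand-in for the beta-ARMA conditional MLE above).
import numpy as np

rng = np.random.default_rng(4)
x = rng.beta(2.0, 5.0, size=40)                   # illustrative (0, 1)-valued data
theta_hat = x.mean()

B = 2000
boot = np.array([rng.choice(x, size=x.size, replace=True).mean() for _ in range(B)])

alpha = 0.05
lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
percentile_ci = (lo, hi)
basic_ci = (2 * theta_hat - hi, 2 * theta_hat - lo)   # reverse-percentile interval
print("percentile:", percentile_ci, "basic:", basic_ci)
```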

9.
Summary.  We consider maximum likelihood methods for estimating the end point of a distribution. The likelihood function is modified by a prior distribution that is imposed on the location parameter. The prior is explicit and meaningful, and has a general form that adapts itself to different settings. Results on convergence rates and limiting distributions are given. In particular, it is shown that the limiting distribution is non-normal in non-regular cases. Parametric bootstrap techniques are suggested for quantifying the accuracy of the estimator. We illustrate performance by applying the method to multiparameter Weibull and gamma distributions.
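The parametric-bootstrap step can be sketched as follows for an ordinary three-parameter Weibull fit, whose location parameter plays the role of the end point; this illustrates only the resampling idea, not the paper's prior-modified likelihood.

```python
# Sketch: parametric bootstrap for the sampling spread of an end-point estimate,
# using a three-parameter Weibull whose location is the lower end point.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
data = weibull_min.rvs(c=1.5, loc=10.0, scale=3.0, size=100, random_state=rng)

c_hat, loc_hat, scale_hat = weibull_min.fit(data)        # loc_hat estimates the end point

B = 300
endpoints = np.empty(B)
for b in range(B):
    sim = weibull_min.rvs(c=c_hat, loc=loc_hat, scale=scale_hat,
                          size=data.size, random_state=rng)
    endpoints[b] = weibull_min.fit(sim)[1]               # refitted location

print(f"end-point estimate: {loc_hat:.3f}, "
      f"parametric-bootstrap 95% range: {np.quantile(endpoints, [0.025, 0.975])}")
```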

10.
In most software reliability models that utilize the nonhomogeneous Poisson process (NHPP), the intensity function for the counting process is usually assumed to be continuous and monotone. However, for various practical reasons, there may exist some change points in the intensity function, and thus the assumption of a continuous and monotone intensity function may be unrealistic in many real situations. In this article, a Bayesian change-point approach using beta mixtures for modeling the intensity function with possible change points is proposed. A hidden Markov model with non-constant transition probabilities is applied to the beta mixture for detecting the change points of the parameters. The estimation and interpretation of the model are illustrated using the Naval Tactical Data System (NTDS) data. The proposed change-point model is also compared with competing models via marginal likelihood. It is seen that the proposed model has the highest marginal likelihood and outperforms the competing models.

11.
One problem with the skew normal model is the difficulty in estimating the shape parameter, for which the maximum likelihood estimate may be infinite when the sample size is moderate. The existing estimators suffer from large bias even for moderate-sized samples. In this article, we propose five estimators of the shape parameter for a scalar skew normal model, either by a bias correction method or by solving a modified score equation. Simulation studies show that, except for the bootstrap estimator, the proposed estimators have smaller bias than existing estimators in the literature for small and moderate samples.

12.
The author shows how to find M-estimators of location whose generating function is monotone and which are optimal or close to optimal. It is easy to identify a consistent sequence of estimators in this class. In addition, it contains simple and efficient approximations in cases where the likelihood function is difficult to obtain. In some neighbourhoods of the normal distribution, the loss of efficiency due to the approximation is quite small. Optimal monotone M-estimators can also be determined in cases where the underlying distribution is known only up to a certain neighbourhood. The author considers the ε-contamination model and an extension thereof that allows the distributions to be arbitrary outside compact intervals. His results also have implications for distributions with monotone score functions. The author illustrates his methodology using Student and stable distributions.
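As a concrete member of this class, the sketch below computes a location M-estimator with the monotone Huber ψ-function by iteratively reweighted averaging (MAD scale held fixed); it illustrates the kind of estimator studied, not the author's optimal construction.

```python
# Sketch: M-estimator of location with a monotone (Huber) generating function,
# solved by iteratively reweighted averaging with a fixed MAD scale.
import numpy as np

def huber_psi(u, k=1.345):
    return np.clip(u, -k, k)

def m_location(x, k=1.345, tol=1e-8, max_iter=100):
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu))               # robust scale, held fixed
    for _ in range(max_iter):
        u = (x - mu) / s
        w = np.where(u != 0, huber_psi(u, k) / u, 1.0)   # IRLS weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(8.0, 1.0, 5)])  # contaminated sample
print("sample mean:", round(x.mean(), 3), "monotone M-estimate:", round(m_location(x), 3))
```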

13.
Ante-dependence models can be used to model the covariance structure in problems involving repeated measures through time. They are conditional regression models that generalize Gabriel's constant-order ante-dependence model. Likelihood-based procedures are presented, together with simple expressions for likelihood ratio test statistics in terms of sums of squares from appropriate analyses of covariance. The estimation of the orders is approached as a model selection problem, and penalized likelihood criteria are suggested. Extensions of all procedures discussed here to situations with a monotone pattern of missing data are presented.

14.
We consider asymptotic properties of the maximum likelihood and related estimators in a clustered logistic joinpoint model with an unknown joinpoint. Sufficient conditions are given for the consistency of confidence bounds produced by the parametric bootstrap; one of the conditions required is that the true location of the joinpoint is not at one of the observation times. A simulation study is presented to illustrate the lack of consistency of the bootstrap confidence bounds when the joinpoint is an observation time. A removal algorithm is presented which corrects this problem, but at the price of an increased mean squared error. Finally, the methods are applied to data on yearly cancer mortality in the US for individuals aged 65 and over.

15.
We derive analytic expressions for the biases, to O(n⁻¹), of the maximum likelihood estimators of the parameters of the generalized Rayleigh distribution family. Using these expressions to bias-correct the estimators is found to be extremely effective in terms of bias reduction, and generally results in a small reduction in relative mean squared error. In general, the analytic bias-corrected estimators are also found to be superior to the alternative of bias correction via the bootstrap.
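In generic notation (standard in this literature, not the paper's specific closed forms for the generalized Rayleigh family), the two corrections being compared are:

```latex
% Analytic first-order correction versus bootstrap correction (generic forms).
\[
  \hat\theta_{\mathrm{BC}} = \hat\theta - \hat b(\hat\theta),
  \qquad
  b(\theta) = \mathbb{E}_{\theta}\bigl(\hat\theta\bigr) - \theta = O(n^{-1}),
\]
\[
  \hat\theta_{\mathrm{BOOT}} = 2\hat\theta - \frac{1}{B}\sum_{b=1}^{B}\hat\theta^{*(b)},
  \qquad \hat\theta^{*(b)} \text{ the estimate from the } b\text{th bootstrap sample.}
\]
```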

16.
Abstract. In this article, we develop a test of the null hypothesis that a real-valued function belongs to a given parametric set against the non-parametric alternative that it is monotone, say decreasing. The method is described in a general model that covers the monotone density model, the monotone regression model and the right-censoring model with monotone hazard rate. The criterion for testing is a distance between a Grenander-type non-parametric estimator and a parametric estimator computed under the null hypothesis. A normalized version of this distance is shown to have an asymptotic normal distribution under the null, whence a test can be developed. Moreover, a bootstrap procedure is shown to be consistent for calibrating the test.
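A Grenander-type estimator of a decreasing density is the left derivative of the least concave majorant (LCM) of the empirical distribution function; the sketch below computes it with a simple upper-hull scan on synthetic exponential data (which indeed has a decreasing density). The paper's test statistic, a normalized distance between this estimator and a parametric fit, is not implemented here.

```python
# Sketch: Grenander-type estimator of a decreasing density on (0, inf) as the
# slopes of the least concave majorant of the empirical CDF.
import numpy as np

def grenander_decreasing(x):
    """Return LCM knots and the piecewise-constant density values between them."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    pts = np.column_stack([np.concatenate([[0.0], x]),
                           np.arange(n + 1) / n])          # ECDF knots (0,0), (x_(i), i/n)
    hull = [0]
    for i in range(1, n + 1):
        # pop the last hull point while it does not lie strictly above the chord
        while len(hull) >= 2:
            o, a, b = pts[hull[-2]], pts[hull[-1]], pts[i]
            cross = (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    knots = pts[hull]
    dens = np.diff(knots[:, 1]) / np.diff(knots[:, 0])     # non-increasing by concavity
    return knots[:, 0], dens

rng = np.random.default_rng(7)
x = rng.exponential(scale=1.0, size=200)
knots, dens = grenander_decreasing(x)
print("first knots:", knots[:4].round(3), "first density values:", dens[:3].round(3))
```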

17.
The two-parameter weighted Lindley distribution is useful for modeling survival data, but its maximum likelihood estimators (MLEs) are biased in finite samples. This motivates us to construct nearly unbiased estimators for the unknown parameters. We adopt a “corrective” approach to derive modified MLEs that are bias-free to second order. We also consider an alternative bias-correction mechanism based on Efron’s bootstrap resampling. Monte Carlo simulations are conducted to compare the performance of the proposed methods with two previous methods in the literature. The numerical evidence shows that the bias-corrected estimators are extremely accurate even for very small sample sizes and are superior to the previous estimators in terms of bias and root mean squared error. Finally, applications to two real datasets are presented for illustrative purposes.

18.
We propose a Bayesian approach for estimating the hazard functions under the constraint of a monotone hazard ratio. We construct a model for the monotone hazard ratio utilizing Cox’s proportional hazards model with a monotone time-dependent coefficient. To reduce computational complexity, we use a signed gamma process prior for the time-dependent coefficient and the Bayesian bootstrap prior for the baseline hazard function. We develop an efficient MCMC algorithm and illustrate the proposed method on simulated and real data sets.

19.
Inverse sampling is an appropriate design for the second phase of capture–recapture experiments, and it provides an exactly unbiased estimator of the population size. However, the sampling distribution of the resulting estimator tends to be highly right-skewed for small recapture samples, so traditional Wald-type confidence intervals appear to be inappropriate. The objective of this paper is to study the performance of interval estimators for the population size under inverse recapture sampling without replacement. To this end, we consider the Wald-type, the logarithmic transformation-based, the Wilson score, the likelihood ratio and the exact methods. We also propose some bootstrap confidence intervals for the population size, including the with-replacement bootstrap (BWR), the without-replacement bootstrap (BWO), and Rao–Wu’s rescaling method. A Monte Carlo simulation is employed to evaluate the performance of the suggested methods in terms of coverage probability, error rates and standardized average length. Our results show that the likelihood ratio and exact confidence intervals are preferred to the other competitors, having coverage probabilities close to the desired nominal level for any sample size, with more balanced error rates for the exact method and shorter length for the likelihood ratio method. The BWO and Rao–Wu rescaling methods may also provide good intervals in some situations; however, their coverage probabilities are not invariant with respect to the population parameters, so they must be used with care.

20.
Summary.  We consider a finite mixture model with k components and a kernel distribution from a general one-parameter family. The problem of testing the hypothesis k = 2 versus k ≥ 3 is studied. There has been no general statistical testing procedure for this problem. We propose a modified likelihood ratio statistic in which, under both the null and the alternative hypotheses, the estimates of the parameters are obtained from a modified likelihood function. It is shown that estimators of the support points are consistent. The asymptotic null distribution of the proposed modified likelihood ratio test is derived and found to be relatively simple and easily applied. Simulation studies for the asymptotic modified likelihood ratio test based on finite mixture models with normal, binomial and Poisson kernels suggest that the proposed test performs well. Simulation studies are also conducted for a bootstrap method with normal kernels. An example involving foetal movement data from a medical study illustrates the testing procedure.
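The bootstrap variant with normal kernels can be sketched as a parametric-bootstrap likelihood-ratio test: fit two- and three-component normal mixtures, then recompute the statistic on samples simulated from the fitted two-component model. The sketch below uses scikit-learn's GaussianMixture as a stand-in for the one-parameter kernel family of the paper; the data and the number of bootstrap replicates are illustrative.

```python
# Sketch: parametric-bootstrap likelihood-ratio test of k = 2 versus k >= 3
# normal mixture components (a stand-in for the bootstrap method mentioned above).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])[:, None]

def lr_stat(data, seed=0):
    g2 = GaussianMixture(n_components=2, n_init=5, random_state=seed).fit(data)
    g3 = GaussianMixture(n_components=3, n_init=5, random_state=seed).fit(data)
    # 2 * (maximized log-likelihood under k = 3 minus under k = 2)
    return 2 * data.shape[0] * (g3.score(data) - g2.score(data)), g2

obs_lr, g2_hat = lr_stat(x)

B, exceed = 200, 0
for b in range(B):
    sim, _ = g2_hat.sample(x.shape[0])          # simulate under the fitted k = 2 null
    sim_lr, _ = lr_stat(sim, seed=b + 1)
    exceed += sim_lr >= obs_lr

print(f"LR statistic: {obs_lr:.2f}, bootstrap p-value: {(exceed + 1) / (B + 1):.3f}")
```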
