Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper addresses the problem of obtaining maximum likelihood estimates for the parameters of the Pearson Type I distribution (beta distribution with unknown end points and shape parameters). Since they do not seem to have appeared in the literature, the likelihood equations and the information matrix are derived. The regularity conditions which ensure asymptotic normality and efficiency are examined, and some apparent conflicts in the literature are noted. To ensure regularity, the shape parameters must be greater than two, giving an (asymmetrical) bell-shaped distribution with high contact in the tails. A numerical investigation was carried out to explore the bias and variance of the maximum likelihood estimates and their dependence on sample size. The numerical study indicated that only for large samples (n ≥ 1000) does the bias in the estimates become small and does the Cramér-Rao bound give a good approximation for their variance. The likelihood function has a global maximum which corresponds to parameter estimates that are inadmissible. Useful parameter estimates can be obtained at a local maximum, which is sometimes difficult to locate when the sample size is small.
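The shape of this likelihood surface can be explored numerically. The Python sketch below (illustrative only: the simulated sample, the endpoint offsets, and the crude grid search over the shape parameters are invented for the example; a real implementation would use a proper optimiser) evaluates the four-parameter beta negative log-likelihood with the endpoints held just outside the sample range, which keeps the search inside the admissible region discussed in the abstract:

```python
import math
import random

def beta4_negloglik(x, a, b, p, q):
    """Negative log-likelihood of the four-parameter beta (Pearson
    Type I) with support (a, b) and shape parameters p, q."""
    if a >= min(x) or b <= max(x) or p <= 0 or q <= 0:
        return math.inf  # outside the admissible parameter region
    n = len(x)
    log_beta = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    s = n * (log_beta + (p + q - 1) * math.log(b - a))
    for xi in x:
        s -= (p - 1) * math.log(xi - a) + (q - 1) * math.log(b - xi)
    return s

# Crude local search over the shape parameters with the endpoints
# held just outside the observed range (a stand-in for a real optimiser).
random.seed(1)
x = [random.betavariate(3.0, 3.0) for _ in range(200)]
a0, b0 = min(x) - 0.05, max(x) + 0.05
best = min((beta4_negloglik(x, a0, b0, p, q), p, q)
           for p in [i / 4 for i in range(4, 41)]
           for q in [i / 4 for i in range(4, 41)])
```

Holding the endpoints strictly outside the sample range is what avoids the inadmissible global maximum mentioned above, where the likelihood diverges as an endpoint approaches an observed value.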

2.
Elimination of a nuisance variable is often non‐trivial and may involve the evaluation of an intractable integral. One approach to evaluate these integrals is to use the Laplace approximation. This paper concentrates on a new approximation, called the partial Laplace approximation, that is useful when the integrand can be partitioned into two multiplicative disjoint functions. The technique is applied to the linear mixed model and shows that the approximate likelihood obtained can be partitioned to provide a conditional likelihood for the location parameters and a marginal likelihood for the scale parameters equivalent to restricted maximum likelihood (REML). Similarly, the partial Laplace approximation is applied to the t‐distribution to obtain an approximate REML for the scale parameter. A simulation study reveals that, in comparison to maximum likelihood, the scale parameter estimates of the t‐distribution obtained from the approximate REML show reduced bias.

3.
We postulate a dynamic spatio-temporal model with a constant covariate effect but with a spatial effect that varies over time and a temporal effect that varies across locations. To mitigate the effect of temporary structural change, the model can be estimated using the backfitting algorithm embedded with the forward search algorithm and the bootstrap. A simulation study is designed to evaluate the structural optimality of the model with the estimation procedure. The fitted model exhibits superior predictive ability relative to the linear model. The proposed algorithm also consistently produces lower relative bias and standard errors for the spatial parameter estimates. While additional neighbourhoods do not necessarily improve the predictive ability of the model, they trim down the relative bias of the parameter estimates, especially for the spatial parameter. The location of the temporary structural change, along with its degree, contributes to lower relative bias of the parameter estimates and to better predictive ability of the model. The estimation procedure is able to produce parameter estimates that are robust to the occurrence of temporary structural change.

4.
5.
It is well established that bandwidths exist that can yield an unbiased non-parametric kernel density estimate at points in particular regions (e.g. convex regions) of the underlying density. These zero-bias bandwidths have superior theoretical properties, including a 1/n convergence rate of the mean squared error. However, the explicit functional form of the zero-bias bandwidth has remained elusive. It is difficult to estimate these bandwidths and virtually impossible to achieve the higher-order rate in practice. This paper addresses these issues by taking a fundamentally different approach to the asymptotics of the kernel density estimator to derive a functional approximation to the zero-bias bandwidth. It develops a simple approximation algorithm that focuses on estimating these zero-bias bandwidths in the tails of densities, where the convexity conditions favourable to the existence of the zero-bias bandwidths are more natural. The estimated bandwidths yield density estimates with mean squared error that is O(n^{-4/5}), the same rate as the mean squared error of density estimates with other choices of local bandwidths. Simulation studies and an illustrative example with air pollution data show that these estimated zero-bias bandwidths outperform other global and local bandwidth estimators in estimating points in the tails of densities.

6.
During recent years, analysts have been relying on approximate methods of inference to estimate multilevel models for binary or count data. In an earlier study of random-intercept models for binary outcomes we used simulated data to demonstrate that one such approximation, known as marginal quasi-likelihood, leads to a substantial attenuation bias in the estimates of both fixed and random effects whenever the random effects are non-trivial. In this paper, we fit three-level random-intercept models to actual data for two binary outcomes, to assess whether refined approximation procedures, namely penalized quasi-likelihood and second-order improvements to marginal and penalized quasi-likelihood, also underestimate the underlying parameters. The extent of the bias is assessed by two standards of comparison: exact maximum likelihood estimates, based on a Gauss–Hermite numerical quadrature procedure, and a set of Bayesian estimates, obtained from Gibbs sampling with diffuse priors. We also examine the effectiveness of a parametric bootstrap procedure for reducing the bias. The results indicate that second-order penalized quasi-likelihood estimates provide a considerable improvement over the other approximations, but all the methods of approximate inference result in a substantial underestimation of the fixed and random effects when the random effects are sizable. We also find that the parametric bootstrap method can eliminate the bias but is computationally very intensive.

7.
When variable selection with stepwise regression and model fitting are conducted on the same data set, competition for inclusion in the model induces a selection bias in coefficient estimators away from zero. In proportional hazards regression with right-censored data, selection bias inflates the absolute value of parameter estimate of selected parameters, while the omission of other variables may shrink coefficients toward zero. This paper explores the extent of the bias in parameter estimates from stepwise proportional hazards regression and proposes a bootstrap method, similar to those proposed by Miller (Subset Selection in Regression, 2nd edn. Chapman & Hall/CRC, 2002) for linear regression, to correct for selection bias. We also use bootstrap methods to estimate the standard error of the adjusted estimators. Simulation results show that substantial biases could be present in uncorrected stepwise estimators and, for binary covariates, could exceed 250% of the true parameter value. The simulations also show that the conditional mean of the proposed bootstrap bias-corrected parameter estimator, given that a variable is selected, is moved closer to the unconditional mean of the standard partial likelihood estimator in the chosen model, and to the population value of the parameter. We also explore the effect of the adjustment on estimates of log relative risk, given the values of the covariates in a selected model. The proposed method is illustrated with data sets in primary biliary cirrhosis and in multiple myeloma from the Eastern Cooperative Oncology Group.
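The generic bootstrap bias-correction idea behind this kind of proposal can be sketched in a few lines of Python. This is not the paper's stepwise proportional hazards procedure; the statistic (a sample maximum, whose downward bias the bootstrap detects) and all constants are a toy stand-in chosen only to make the mechanics visible:

```python
import random
import statistics

def bootstrap_bias_correct(sample, statistic, n_boot=2000, seed=0):
    """Generic bootstrap bias correction:
    bias_hat = mean(theta*_b) - theta_hat, corrected = theta_hat - bias_hat."""
    rng = random.Random(seed)
    n = len(sample)
    theta_hat = statistic(sample)
    reps = [statistic([sample[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    bias_hat = statistics.fmean(reps) - theta_hat
    return theta_hat - bias_hat, bias_hat

# Toy example: the sample maximum underestimates the population maximum;
# the bootstrap bias estimate is negative, so the correction pushes it up.
rng = random.Random(42)
sample = [rng.uniform(0.0, 1.0) for _ in range(50)]
corrected, bias_hat = bootstrap_bias_correct(sample, max)
```

In the stepwise-selection setting the same two-line correction is applied to each selected coefficient, with the entire selection procedure repeated inside every bootstrap replicate.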

8.
Maximum likelihood estimates (MLEs) for logistic regression coefficients are known to be biased in finite samples and consequently may produce misleading inferences. Bias-adjusted estimates can be calculated using the first-order asymptotic bias derived from a Taylor series expansion of the log likelihood. Jackknifing can also be used to obtain bias-corrected estimates, but the approach is computationally intensive, requiring an additional series of iterations (steps) for each observation in the dataset. Although the one-step jackknife has been shown to be useful in logistic regression diagnostics and in the estimation of classification error rates, it does not effectively reduce bias. The two-step jackknife, however, can reduce computation in moderate-sized samples, provide estimates of dispersion and classification error, and appears to be effective in bias reduction. Another alternative, a two-step closed-form approximation, is found to be similar to the Taylor series method in certain circumstances. Monte Carlo simulations indicate that all the procedures, but particularly the multi-step jackknife, may tend to over-correct in very small samples. Comparison of the various bias correction procedures in an example from the medical literature illustrates that bias correction can have a considerable impact on inference.
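The leave-one-out jackknife bias correction that underlies these schemes can be sketched as follows. The statistic here is the plug-in variance (divisor n) rather than a logistic regression coefficient, because for the variance the jackknife correction is exactly the unbiased sample variance, making the sketch easy to verify; the paper's setting would substitute a refitted logistic MLE for each deleted observation:

```python
import statistics

def jackknife_bias_corrected(sample, statistic):
    """Leave-one-out jackknife bias correction:
    theta_jack = n * theta_hat - (n - 1) * mean(theta_(-i))."""
    n = len(sample)
    theta_hat = statistic(sample)
    loo = [statistic(sample[:i] + sample[i + 1:]) for i in range(n)]
    return n * theta_hat - (n - 1) * statistics.fmean(loo)

def mle_variance(x):
    """Plug-in (maximum likelihood) variance, biased by factor (n-1)/n."""
    m = statistics.fmean(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.0, 2.5]
corrected = jackknife_bias_corrected(data, mle_variance)
```

For this statistic the corrected value coincides with `statistics.variance(data)`, the divisor-(n-1) estimator, which is the classic textbook check that the jackknife removes the O(1/n) bias term.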

9.
Inference for a generalized linear model is generally performed using asymptotic approximations for the bias and the covariance matrix of the parameter estimators. For small experiments, these approximations can be poor and result in estimators with considerable bias. We investigate the properties of designs for small experiments when the response is described by a simple logistic regression model and parameter estimators are to be obtained by the maximum penalized likelihood method of Firth [Firth, D., 1993, Bias reduction of maximum likelihood estimates. Biometrika, 80, 27–38]. Although this method achieves a reduction in bias, we illustrate that the remaining bias may be substantial for small experiments, and propose minimization of the integrated mean square error, based on Firth's estimates, as a suitable criterion for design selection. This approach is used to find locally optimal designs for two support points.
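Firth's penalization adds half the log-determinant of the Fisher information to the log-likelihood. It can be illustrated in the simplest possible case, an intercept-only logistic model, where the penalized maximizer has a closed form; the grid search and the data (y successes out of n trials, with y = 0 chosen to show that complete separation still yields a finite estimate) are invented for this sketch:

```python
import math

def firth_loglik_intercept(beta, y, n):
    """Firth-penalized log-likelihood for an intercept-only logistic
    model: l(beta) + 0.5 * log I(beta), with I(beta) = n * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-beta))
    loglik = y * math.log(p) + (n - y) * math.log(1.0 - p)
    info = n * p * (1.0 - p)
    return loglik + 0.5 * math.log(info)

# Maximise on a grid. In this model the penalty is equivalent to adding
# half a success and half a failure, so the maximizer is
# logit((y + 0.5) / (n + 1)) and stays finite even when y = 0.
y, n = 0, 20
grid = [i / 1000 for i in range(-8000, 1)]
beta_hat = max(grid, key=lambda b: firth_loglik_intercept(b, y, n))
closed_form = math.log((y + 0.5) / (n - y + 0.5))
```

The unpenalized MLE for y = 0 is minus infinity; the penalty term pulls the maximum back to a finite interior point, which is the bias-reduction effect the abstract refers to.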

10.
In this article, we assess Bayesian estimation and prediction using the integrated nested Laplace approximation (INLA) on a stochastic volatility (SV) model. This was performed through a Monte Carlo study with 1,000 simulated time series. To evaluate the estimation method, two criteria were considered: the bias and the square root of the mean square error (smse). The criteria used for prediction are the one-step-ahead forecast of volatility and the one-day Value at Risk (VaR). The main findings are that the INLA approximations are fairly accurate and relatively robust to the choice of prior distribution on the persistence parameter. Additionally, VaR estimates are computed and compared for the returns of three financial index time series.

11.
The primary purpose of this paper is to develop a sequential Monte Carlo approximation to an ideal bootstrap estimate of the parameter of interest. Using the concept of fixed-precision approximation, we construct a sequential stopping rule for determining the number of bootstrap samples to be taken in order to achieve a specified precision of the Monte Carlo approximation. It is shown that the sequential Monte Carlo approximation is asymptotically efficient in the problems of estimating the bias and standard error of a given statistic. Efficient bootstrap resampling is discussed, and a numerical study is carried out to illustrate the theoretical results.
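A fixed-precision stopping rule of this general kind can be sketched as below. The batch size, the precision target, and the rule itself (stop as soon as the Monte Carlo standard error of the running estimate drops below a threshold) are simplified stand-ins for illustration, not the paper's exact construction:

```python
import random
import statistics

def sequential_bootstrap(sample, statistic, precision=0.005,
                         batch=100, max_reps=20000, seed=0):
    """Draw bootstrap replicates in batches until the Monte Carlo
    standard error of the running mean falls below `precision`.
    Returns the Monte Carlo estimate and the number of replicates used."""
    rng = random.Random(seed)
    n, reps = len(sample), []
    while len(reps) < max_reps:
        for _ in range(batch):
            reps.append(statistic([sample[rng.randrange(n)]
                                   for _ in range(n)]))
        mc_se = statistics.stdev(reps) / len(reps) ** 0.5
        if mc_se < precision:
            break
    return statistics.fmean(reps), len(reps)

rng = random.Random(7)
sample = [rng.gauss(0.0, 1.0) for _ in range(60)]
est, n_reps = sequential_bootstrap(sample, statistics.fmean)
```

The point of the sequential rule is that `n_reps` adapts to the variability of the statistic: a noisier statistic triggers more bootstrap batches before the precision target is met.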

12.
A correlated probit model approximation for conditional probabilities (Mendell and Elston 1974) is used to estimate the variance for binary matched-pairs data by maximum likelihood. Using asymptotic data, the bias of the estimates is shown to be small for a wide range of intra-class correlations and incidences. This approximation is also compared with other recently published, or implemented, improved approximations. For the small-sample examples presented, it shows a substantial advantage over the other approximations. The method is extended to allow covariates for each observation, with fitting by iteratively reweighted least squares.

13.
We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large those studies are, and that little is gained by incorporating such studies unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions.

14.
This paper investigates bias in parameter estimates and residual diagnostics for parametric multinomial models by considering the effect of deleting a cell. In particular, it describes the average changes in the standardized residuals and maximum likelihood estimates resulting from conditioning on the given cells. These changes suggest how individual cell observations affect biases. Emphasis is placed on the role of individual cell observations in determining bias and on how bias affects standard diagnostic methods. Examples from genetics and log-linear models are considered. Numerical results show that conditioning on an influential cell results in substantial changes in biases.

15.
A simulation study of the binomial-logit model with correlated random effects is carried out based on the generalized linear mixed model (GLMM) methodology. Simulated data with various numbers of regression parameters and different values of the variance component are considered. The performance of approximate maximum likelihood (ML) and residual maximum likelihood (REML) estimators is evaluated. For a range of true parameter values, we report the average biases of estimators, the standard error of the average bias and the standard error of estimates over the simulations. In general, in terms of bias, the two methods do not show significant differences in estimating regression parameters. The REML estimation method is slightly better in reducing the bias of variance component estimates.

16.
A methodology is presented for gaining insight into properties of maximum likelihood estimates from nonidentically distributed Gaussian data, such as outlier influence, bias, and the width of confidence intervals. The methodology is based on an application of the implicit function theorem to derive an approximation to the maximum likelihood estimator. This approximation, unlike the maximum likelihood estimator, is expressed in closed form, and thus it can be used in lieu of costly Monte Carlo simulation to study the properties of the maximum likelihood estimator.

17.
The article presents careful comparisons among several empirical Bayes estimates of the precision parameter of a Dirichlet process prior, in the setting of univariate observations and multigroup data. Specifically, the data follow a two-stage compound sampling model, where the prior is assumed to be a Dirichlet process within a Bayesian nonparametric framework. The precision parameter α measures the strength of the prior belief, and several kinds of estimates are generated from the observations, including the naive estimate, two calibrated naive estimates, and two different types of maximum likelihood estimates stemming from distinct distributions. We explore some theoretical properties and provide explicit, detailed comparisons among these estimates in terms of bias, variance, and mean squared error. We also present the corresponding calculation algorithms and numerical simulations to illustrate the theoretical results.

18.
Based on a progressively Type-I interval censored sample, the problem of estimating the unknown parameters of a two-parameter generalized half-normal (GHN) distribution is considered. Different methods of estimation are discussed, including maximum likelihood estimation, the midpoint approximation method, approximate maximum likelihood estimation, the method of moments, and estimation based on the probability plot. Several Bayesian estimates with respect to different symmetric and asymmetric loss functions, such as squared error, LINEX, and general entropy, are calculated. Lindley's approximation method is applied to determine the Bayesian estimates. Monte Carlo simulations are performed to compare the performances of the different methods. Finally, an analysis is also carried out for a real dataset.

19.
Some statistical models defined in terms of a generating stochastic mechanism have intractable distribution theory, which renders parameter estimation difficult. However, a Monte Carlo estimate of the log-likelihood surface for such a model can be obtained via computation of nonparametric density estimates from simulated realizations of the model. Unfortunately, the bias inherent in density estimation can cause bias in the resulting log-likelihood estimate that alters the location of its maximizer. In this paper a methodology for radically reducing this bias is developed for models with an additive error component. An illustrative example involving a stochastic model of molecular fragmentation and measurement is given.

20.
The problem of estimating the unknown parameters and reliability function of a two-parameter Burr Type XII distribution is considered on the basis of a progressively Type II censored sample. Several Bayesian estimates are obtained under different symmetric and asymmetric loss functions, such as squared error, LINEX, and general entropy. These Bayesian estimates are evaluated by applying the Lindley approximation method. Using simulations, all Bayesian estimates are compared numerically with the corresponding maximum likelihood estimates in terms of their mean square error values, and some specific comments are made. Finally, two data sets are analyzed for the purpose of illustration.
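The Bayes estimators under these loss functions have simple closed forms in terms of posterior expectations, which can be illustrated with plain Monte Carlo over posterior draws. In this sketch a gamma sample stands in for a real posterior, and Monte Carlo averaging replaces the Lindley approximation used in the paper:

```python
import math
import random
import statistics

def linex_bayes_estimate(draws, a):
    """Bayes estimator under LINEX loss L(d, t) = exp(a(d-t)) - a(d-t) - 1:
    d* = -(1/a) * log E[exp(-a * theta)]."""
    return -math.log(statistics.fmean(math.exp(-a * t) for t in draws)) / a

def entropy_bayes_estimate(draws, c):
    """Bayes estimator under general entropy loss:
    d* = (E[theta^(-c)])^(-1/c)."""
    return statistics.fmean(t ** (-c) for t in draws) ** (-1.0 / c)

rng = random.Random(3)
draws = [rng.gammavariate(4.0, 0.5) for _ in range(50000)]  # stand-in posterior, mean 2
sq_err = statistics.fmean(draws)             # posterior mean: squared error loss
linex = linex_bayes_estimate(draws, 1.0)     # penalises overestimation for a > 0
entropy = entropy_bayes_estimate(draws, 1.0) # posterior harmonic mean for c = 1
```

With a > 0 the LINEX estimate sits below the posterior mean, reflecting the asymmetric penalty on overestimation; the general entropy estimate with c = 1 is the posterior harmonic mean.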


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号