Similar Articles
20 similar articles found.
1.
Relative risks (RRs) are often considered the preferred measures of association in randomized controlled trials, especially when the binary outcome of interest is common. To estimate RRs directly, log-binomial regression has been recommended. Although log-binomial regression is a special case of generalized linear models, it does not respect the natural parameter constraints, and maximum likelihood estimation is often subject to numerical instability that leads to convergence problems. Alternative methods for solving the convergence problems of log-binomial regression have been proposed. A Bayesian approach has also been introduced, but the comparison between this method and frequentist methods has not been fully explored. We compared five frequentist methods and one Bayesian method for estimating RRs under a variety of scenarios. Based on our simulation study, no single method performs well across all statistical properties, but COPY 1000 and modified log-Poisson regression can be considered in practice.
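As a rough illustration of the modified log-Poisson approach mentioned above, the sketch below fits a Poisson generalized linear model with a log link to binary data and uses a robust (sandwich) covariance for the standard errors. The data, variable names, and effect sizes are hypothetical, and the statsmodels API is an assumption of this sketch; it is not the simulation design used in the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical cohort: binary outcome with a common event rate and one exposure.
n = 500
exposure = rng.binomial(1, 0.4, size=n)
p = 0.25 * np.exp(0.5 * exposure)          # true relative risk exp(0.5) ~ 1.65
y = rng.binomial(1, p)

X = sm.add_constant(exposure)

# "Modified log-Poisson" approach: Poisson GLM on binary data with a log link,
# using a robust (sandwich) covariance to correct the misspecified variance.
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("log RR estimate:", poisson_fit.params[1])
print("RR estimate:", np.exp(poisson_fit.params[1]))
print("robust SE of log RR:", poisson_fit.bse[1])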

2.
Maximum likelihood (ML) estimation of relative risks via log-binomial regression requires a restricted parameter space. Computation via nonlinear programming is simple to implement and has a high convergence rate. We show that the optimization problem is well posed (convex domain and convex objective) and provide a variance formula, along with a methodology for obtaining standard errors and prediction intervals that accounts for estimates on the boundary of the parameter space. We performed simulations under several scenarios already used in the literature in order to assess the performance of ML and of two other common estimation methods.
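The sketch below illustrates the constrained-optimization view described above: the log-binomial log-likelihood is maximized subject to the linear constraints X @ beta <= 0, so that every fitted probability exp(x'beta) stays at or below 1. The simulated data, starting values, and the use of SLSQP in scipy are illustrative assumptions; the paper's variance formula and boundary-adjusted intervals are not reproduced here.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical data for a log-binomial model P(Y=1 | x) = exp(x @ beta).
n = 400
x1 = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x1])
beta_true = np.array([-1.5, 1.0])
y = rng.binomial(1, np.exp(X @ beta_true))

def neg_log_lik(beta):
    eta = X @ beta
    # Guard against log(0) when a fitted probability touches 1 on the boundary.
    log1m = np.log(np.clip(1.0 - np.exp(eta), 1e-12, None))
    return -np.sum(y * eta + (1 - y) * log1m)

# Restricted parameter space: X @ beta <= 0, i.e. all fitted probabilities <= 1.
constraints = [{"type": "ineq", "fun": lambda beta: -(X @ beta)}]
fit = minimize(neg_log_lik, x0=np.array([-1.0, 0.0]),
               method="SLSQP", constraints=constraints)
print("constrained MLE:", fit.x)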

3.
The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum-likelihood estimates and that the likelihood-ratio test based on the local asymptotic approximation has higher power than the likelihood-ratio test based on the usual chi-squared approximation. They examine the finite-sample properties of the proposed intervals by means of a simulation study.

4.
The paper establishes the asymptotic distribution of the conditional maximum likelihood estimator for integer-valued generalized autoregressive conditional heteroskedastic (INGARCH) processes with conditional negative binomial distributions, where the number of successes in the definition of the negative binomial distribution is assumed known, when the true parameter is at the boundary of the parameter space. Based on this result, coefficient nullity tests are developed for model simplification. The proposed tests are investigated through a simulation study.

5.
The Weibull distribution is composited with the Pareto model to obtain a flexible, reliable long-tailed parametric distribution for modeling unimodal failure rate data. The hazard function of the composite family accommodates decreasing and unimodal failure rates, which are separated by the boundary of the shape-parameter space, where the shape parameter gamma equals a known constant. Least squares and maximum likelihood parameter estimation techniques are discussed. The advantages of using the proposed family are demonstrated and compared using well-known examples: guinea pig survival time data, head and neck cancer data, and nasopharynx cancer survival data.

6.
We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models with those where the units with missing data are disregarded. We confirm that although, in general, analyses under the correct missing at random (MAR) and missing completely at random (MCAR) models are more efficient even for small sample sizes, there are exceptions where they may not improve on the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space as well as lack of identifiability of the parameters of saturated models may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that an MNAR model is misspecified because the estimate is on the boundary of the parameter space.

7.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log-binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non-MLE approaches. In this paper we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non-MLE methods. Nonetheless, non-MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi-likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi-likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.
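As a point of reference for the discussion above, the sketch below fits the log link binomial GLM (the log-binomial model) directly. Fisher scoring / IRLS as run here can fail or drift out of range when the MLE lies on or near the parameter space boundary, which is what motivates the more reliable computational methods and the bootstrap recommendation in the abstract. The data and effect sizes are hypothetical, and the statsmodels API is an assumption of this sketch.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical cohort data: one binary exposure, common outcome.
n = 300
exposure = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 0.2 * np.exp(0.6 * exposure))   # true RR = exp(0.6)
X = sm.add_constant(exposure)

# Standard log-binomial fit: binomial family with a log link.
# Fisher scoring / IRLS may fail to converge when the MLE is on or near
# the boundary of the constrained parameter space.
log_binom = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
try:
    fit = log_binom.fit()
    print("RR estimate:", np.exp(fit.params[1]))
except Exception as err:        # convergence / domain errors are not uncommon
    print("log-binomial fit failed:", err)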

8.
Accurate moments of the maximum likelihood and moment estimators for the scale and shape parameters of a two-parameter gamma density are given, the former being tabulated over a segment of the parameter space. In addition, joint acceptance regions are given for a particular case. The three-parameter model is also considered, and comments are made on second-order asymptotics for the maximum likelihood estimators.
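For orientation, the sketch below computes the two estimators the abstract compares for a two-parameter gamma density: the moment estimators obtained from the sample mean and variance, and the maximum likelihood estimators. The simulated sample is hypothetical and scipy is assumed; the paper's exact-moment tabulations and acceptance regions are not reproduced.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical sample from a two-parameter gamma density (shape k, scale theta).
k_true, theta_true = 2.5, 1.8
x = rng.gamma(shape=k_true, scale=theta_true, size=200)

# Moment estimators: mean = k * theta, variance = k * theta**2.
m, v = x.mean(), x.var(ddof=1)
k_mom, theta_mom = m**2 / v, v / m

# Maximum likelihood estimators (location fixed at 0 for the two-parameter model).
k_mle, _, theta_mle = stats.gamma.fit(x, floc=0)

print("moment estimates:", k_mom, theta_mom)
print("ML estimates:    ", k_mle, theta_mle)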

9.
The problem of determining the minimum sample size for the estimation of a binomial parameter with a prescribed margin of error and confidence level is considered. It is assumed that available auxiliary information allows the parameter space to be restricted to an interval whose left boundary is above zero. A range-preserving estimator resulting from the conditional maximization of the likelihood function is considered. A method for exact computation of the minimum sample size controlling the relative error is proposed. Several tables of minimum sample sizes for typical situations are also presented. The range-preserving estimator achieves the same precision and confidence level as the unrestricted maximum likelihood estimator but with a smaller sample.
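A crude normal-approximation counterpart of the exact calculation described above is sketched below: for a prescribed relative error and confidence level, the required sample size over the restricted parameter space [p_lower, 1] is driven by the left boundary, since (1 - p)/p decreases in p. The function name and numeric inputs are illustrative assumptions; the paper's exact method and its range-preserving estimator are not reproduced.

import math
from scipy.stats import norm

def n_relative_error(p_lower, rel_error, conf=0.95):
    # Normal-approximation sample size so that |p_hat - p| <= rel_error * p
    # with the given confidence, for any p in [p_lower, 1].
    # (1 - p) / p is decreasing in p, so the worst case is the left boundary.
    z = norm.ppf(0.5 + conf / 2.0)
    n = z**2 * (1.0 - p_lower) / (rel_error**2 * p_lower)
    return math.ceil(n)

# Example: auxiliary information restricts p to [0.2, 1]; 10% relative error.
print(n_relative_error(0.2, 0.10))    # restricted parameter space
print(n_relative_error(0.05, 0.10))   # a smaller lower bound needs more data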

10.
Consider the Lehmann model with time-dependent covariates, which is different from Cox's model. We show that (1) the parameter space for β under the Lehmann model is restricted, and the maximum point of the parametric likelihood for β may lie outside the parameter space; and (2) for some particular time-dependent covariates, the semiparametric maximum likelihood estimator (SMLE) under the standard generalized likelihood is inconsistent, and we propose a modified generalized likelihood that leads to a consistent SMLE.

11.
It is well known that the normal mixture with unequal variances has an unbounded likelihood, and thus the corresponding global maximum likelihood estimator (MLE) is undefined. One commonly used solution is to put a constraint on the parameter space so that the likelihood is bounded; one can then run the EM algorithm on this constrained parameter space to find the constrained global MLE. However, choosing the constraint parameter is a difficult issue, and in many cases different choices may give different constrained global MLEs. In this article, we propose a profile log likelihood method and a graphical way to find the maximum interior mode. Based on our proposed method, we can also see how the constraint parameter used in the constrained EM algorithm affects the constrained global MLE. Using two simulation examples and a real data application, we demonstrate the success of our new method in dealing with the unboundedness of the mixture likelihood and locating the maximum interior mode.
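The unboundedness referred to above is easy to reproduce numerically: place one component's mean on a data point and let its standard deviation shrink, and the mixture log-likelihood grows without bound. The sketch below does exactly that for a two-component normal mixture with hypothetical data; the authors' profile log likelihood method and graphical device are not reproduced here.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=50)        # hypothetical data, in fact one component

def mixture_loglik(x, w, mu1, s1, mu2, s2):
    # Two-component normal mixture with unequal variances.
    dens = w * norm.pdf(x, mu1, s1) + (1 - w) * norm.pdf(x, mu2, s2)
    return np.sum(np.log(dens))

# Put the first component's mean exactly on one observation and let its
# standard deviation shrink: the log-likelihood grows without bound.
for s1 in [0.5, 0.1, 0.01, 0.001, 1e-6]:
    print(s1, mixture_loglik(x, 0.1, x[0], s1, x.mean(), x.std()))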

12.
Log-linear models for multiway contingency tables where one variable is subject to non-ignorable non-response will often yield boundary solutions, with the probability of non-respondents being classified in some cells of the table estimated as 0. The paper considers the effect of this non-standard behaviour on two methods of interval estimation based on the distribution of the maximum likelihood estimator. The first method relies on the estimator being approximately normally distributed with variance equal to the inverse of the information matrix. It is shown that the information matrix is singular for boundary solutions, but intervals can be calculated after a simple transformation. For the second method, based on the bootstrap, asymptotic results suggest that the coverage properties may be poor for boundary solutions. Both methods are compared with profile likelihood intervals in a simulation study based on data from the British General Election Panel Study. The results of this study indicate that all three methods perform poorly for a parameter of the non-response model, whereas they all perform well for a parameter of the margin model, irrespective of whether or not there is a boundary solution.

13.
The model parameters of linear state space models are typically estimated by maximum likelihood, where the likelihood is computed analytically with the Kalman filter. Outliers can deteriorate this estimation. We therefore propose an alternative estimation method: the Kalman filter is replaced by a robust version, and the maximum likelihood estimator is robustified as well. The performance of the robust estimator is investigated in a simulation study. Robust estimation of time-varying parameter regression models is considered as a special case. Finally, the methodology is applied to real data.
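To make the first sentence concrete, the sketch below evaluates the Gaussian log-likelihood of a simple local level model through the Kalman filter's prediction error decomposition; maximizing this function over the two variances gives the classical (non-robust) estimator that the abstract starts from. The model, data, and near-diffuse initialization are illustrative assumptions, and a comment marks where a robust filter would intervene; the authors' specific robustification is not reproduced.

import numpy as np

def local_level_loglik(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    # Gaussian log-likelihood of a local level model via the Kalman filter:
    #   y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
    # using the prediction error decomposition (near-diffuse initial variance p0).
    a, p, loglik = a0, p0, 0.0
    for yt in y:
        f = p + sigma2_eps                 # prediction error variance
        v = yt - a                         # prediction error (innovation)
        loglik += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                          # Kalman gain
        a = a + k * v                      # updated/predicted state
        p = p * (1 - k) + sigma2_eta       # predicted state variance for t+1
        # A robust filter would downweight large standardized innovations
        # v / sqrt(f) here instead of using them at face value.
    return loglik

# Hypothetical series; ML estimation would maximize this over the two variances.
rng = np.random.default_rng(5)
mu = np.cumsum(rng.normal(0, 0.5, 100))
y = mu + rng.normal(0, 1.0, 100)
print(local_level_loglik(y, sigma2_eps=1.0, sigma2_eta=0.25))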

14.
The established general results on convergence properties of the EM algorithm require the sequence of EM parameter estimates to fall in the interior of the parameter space over which the likelihood is being maximized. This paper presents convergence properties of the EM sequence of likelihood values and parameter estimates in constrained parameter spaces, where the sequence of EM parameter estimates may converge to the boundary of the constrained parameter space contained in the interior of the unconstrained parameter space. Examples of the behavior of the EM algorithm applied to such parameter spaces are presented.

15.
Penalized regression spline models afford a simple mixed model representation in which variance components control the degree of non-linearity in the smooth function estimates. This motivates the study of lack-of-fit tests based on the restricted maximum likelihood ratio statistic, which tests whether variance components are 0 against the alternative that they take positive values. A further complication of this one-sided testing problem is that the variance component lies on the boundary of the parameter space under the null hypothesis. Conditions are obtained on the design of the regression spline models under which asymptotic distribution theory applies, and finite sample approximations to the asymptotic distribution are provided. Test statistics are studied for simple as well as multiple-regression models.
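For context on the boundary issue raised above, the sketch below computes a p-value from the reference distribution most often quoted for testing a single variance component against zero: an equal mixture of a point mass at zero and a chi-squared distribution on one degree of freedom. This mixture is only the crude large-sample benchmark; the abstract's point is precisely that design-specific conditions and finite-sample approximations are needed for spline models, so the function is a generic illustration rather than the paper's procedure. The statistic value is hypothetical.

from scipy.stats import chi2

def boundary_lrt_pvalue(lrt_stat):
    # p-value for testing H0: variance component = 0 using the commonly cited
    # large-sample reference distribution 0.5*chi2_0 + 0.5*chi2_1, which
    # accounts for the parameter lying on the boundary under H0.
    if lrt_stat <= 0:
        return 1.0
    return 0.5 * chi2.sf(lrt_stat, df=1)

print(boundary_lrt_pvalue(3.2))   # hypothetical (restricted) LR statistic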

16.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the importance sampling parameter is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the importance sampling parameter is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
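The sketch below implements the importance-sampling idea described above on a deliberately simple latent variable model with independent observations (a normal latent mean with normal measurement error), for which the exact MLE is the sample mean, so the Monte Carlo answer can be checked. The model, the choice of importance sampling parameter psi, and the simulation size m are illustrative assumptions, not the settings studied in the paper.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(6)

# Hypothetical latent variable model with independent observations:
#   z_i ~ N(theta, 1) (latent),  y_i | z_i ~ N(z_i, 1),
# so marginally y_i ~ N(theta, 2) and the exact MLE is y.mean().
theta_true, n, m = 1.0, 200, 2000
z = rng.normal(theta_true, 1.0, n)
y = z + rng.normal(0.0, 1.0, n)

psi = 0.0                                  # importance sampling parameter
# Simulate from the conditional law of z given y at psi: N((y_i + psi)/2, 1/2).
z_sim = rng.normal((y[:, None] + psi) / 2.0, np.sqrt(0.5), size=(n, m))

def mc_loglik_ratio(theta):
    # log of the Monte Carlo estimate of L(theta)/L(psi); the y|z factor cancels
    # in the importance ratio f_theta(y, z) / f_psi(y, z).
    w = norm.pdf(z_sim, loc=theta, scale=1.0) / norm.pdf(z_sim, loc=psi, scale=1.0)
    return np.sum(np.log(w.mean(axis=1)))

mcmle = minimize_scalar(lambda t: -mc_loglik_ratio(t), bounds=(-3, 5), method="bounded")
print("MCML estimate:", mcmle.x, " exact MLE:", y.mean())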

17.
Population-parameter mapping (PPM) is a method for estimating the parameters of latent scientific models that describe the statistical likelihood function. The PPM method involves Bayesian inference in terms of the statistical parameters and the mapping from the statistical parameter space to the parameter space of the latent scientific parameters, and it obtains a model coherence estimate, P(coh). The P(coh) statistic can be valuable for designing experiments and comparing competing models, and it can be helpful in redesigning flawed models. Examples are provided in which the PPM point estimates achieved greater estimation precision than the maximum likelihood estimator (MLE) for small sample sizes.

18.
The exponential regression model is important in analyzing data from heterogeneous populations. In this paper we propose a simple method to estimate the regression parameters using binary data. Under certain design distributions for the explanatory variables, including elliptically symmetric distributions, the estimators are shown to be consistent and asymptotically normal when the sample size is large. For finite samples, the new estimates were shown to behave reasonably well. They are competitive with the maximum likelihood estimates and, more importantly, according to our simulation results, the CPU time for computing the new estimates is only 1/7 of that required for computing the usual maximum likelihood estimates. We expect the savings in CPU time to be even more dramatic for higher-dimensional regression parameter spaces.

19.
The Fay–Herriot model is a linear mixed model that plays a relevant role in small area estimation (SAE). Under the SAE set-up, tools for selecting an adequate model are required. Applied statisticians are often interested in deciding whether it is worthwhile to use a mixed-effects model instead of a simpler fixed-effects model. This problem is not standard because, under the null hypothesis, the random-effect variance is on the boundary of the parameter space. The likelihood ratio test and the residual likelihood ratio test are proposed and their finite sample distributions are derived. Finally, we analyse their behaviour under simulated scenarios and we also apply them to real data.

20.
The small sample properties of the score function approximation to the maximum likelihood estimator for the three-parameter lognormal distribution using an alternative parameterization are considered. The new set of parameters is a continuous function of the usual parameters. However, unlike with the usual parameterization, the score function technique for this parameterization is extremely insensitive to starting values. Further, it is shown that whenever the sample third moment is less than zero, a local maximum of the likelihood function exists at a boundary point. For the usual parameterization, this point is unattainable. However, the alternative parameter space can be expanded to include these boundary points. This procedure results in good estimates of the expected value, variance, extreme percentiles and other parameters of the distribution, even in samples where, with the typical parameterization, the estimation procedure fails to converge.
