Similar Articles
20 similar articles found (search time: 15 ms)
1.
Models for repeated measures or growth curves consist of a mean response plus an error term, and the errors are usually correlated. Both maximum likelihood and residual maximum likelihood (REML) estimators of a regression model with dependent errors are derived for cases in which the variance matrix of the error model admits a convenient Cholesky factorisation. This factorisation may be linked to methods for producing recursive estimates of the regression parameters and recursive residuals, providing a convenient computational method. The method is used to develop a general approach to repeated measures analysis.
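The computational device in this abstract can be illustrated with a minimal numpy sketch (not the paper's code). The error covariance is assumed AR(1) here purely for illustration; its Cholesky factor whitens the regression, so generalized least squares reduces to ordinary least squares on triangular-solved data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: one series with AR(1) errors, whose covariance
# matrix admits the convenient Cholesky factorisation the abstract mentions.
n = 50
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
beta_true = np.array([1.0, 0.5])
rho = 0.6
idx = np.arange(n)
V = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) correlation matrix
L = np.linalg.cholesky(V)                        # V = L L'

# Simulate correlated errors and responses
y = X @ beta_true + L @ rng.standard_normal(n)

# GLS by whitening: solving the triangular systems L z = y and L W = X
# turns correlated-error regression into ordinary least squares on (W, z);
# the entries of z - W @ beta are closely related to recursive residuals.
z = np.linalg.solve(L, y)
W = np.linalg.solve(L, X)
beta_gls, *_ = np.linalg.lstsq(W, z, rcond=None)
```

Because `L` is lower triangular, the solves proceed observation by observation, which is what makes the recursive-estimation link computationally attractive.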

2.
A simple procedure is developed for numerically solving the likelihood equations that estimate the regression parameters of a first-order response surface model for the treatment parameters of mixture paired comparison experiments. It is demonstrated that, for defined rotatable designs, these regression parameters are simple functions of the main effect parameters of a corresponding factorial model with no interactions. The maximum likelihood estimators of those main effect parameters, and hence of their corresponding regression parameters, are obtained using treatment contrasts, factorial procedures, and iteration. A numerical example illustrates applications of the procedures developed in this paper.

3.
Mean survival time is often of inherent interest in medical and epidemiologic studies. In the presence of censoring and when covariate effects are of interest, Cox regression is the strong default, but mostly due to convenience and familiarity. When survival times are uncensored, covariate effects can be estimated as differences in mean survival through linear regression. Tobit regression can validly be performed through maximum likelihood when the censoring times are fixed (ie, known for each subject, even in cases where the outcome is observed). However, Tobit regression is generally inapplicable when the response is subject to random right censoring. We propose Tobit regression methods based on weighted maximum likelihood which are applicable to survival times subject to both fixed and random censoring times. Under the proposed approach, known right censoring is handled naturally through the Tobit model, with inverse probability of censoring weighting used to overcome random censoring. Essentially, the re-weighted data are intended to represent those that would have been observed in the absence of random censoring. We develop methods for estimating the Tobit regression parameter, then the population mean survival time. A closed form large-sample variance estimator is proposed for the regression parameter estimator, with a semiparametric bootstrap standard error estimator derived for the population mean. The proposed methods are easily implementable using standard software. Finite-sample properties are assessed through simulation. The methods are applied to a large cohort of patients wait-listed for kidney transplantation.
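A weighted Tobit likelihood of the kind described can be sketched in a few lines with scipy. This is a simplified illustration, not the authors' implementation: the censoring time `c` is fixed and known, and the inverse-probability-of-censoring weights `w` are set to 1 here (in the paper's setting they would be estimated, e.g. from a censoring-distribution model).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true, sigma_true = np.array([2.0, 1.0]), 0.8
t = X @ beta_true + sigma_true * rng.standard_normal(n)

c = 3.0                                   # fixed, known right-censoring time
y = np.minimum(t, c)
delta = (t <= c).astype(float)            # 1 = observed, 0 = censored at c
w = np.ones(n)                            # IPC weights; all 1 in this sketch

def neg_weighted_loglik(theta):
    """Weighted Tobit negative log-likelihood with right censoring at c."""
    beta, sigma = theta[:2], np.exp(theta[2])   # log-parameterized scale
    mu = X @ beta
    # Observed responses contribute the normal density; censored ones P(T > c)
    ll = delta * norm.logpdf(y, mu, sigma) + (1 - delta) * norm.logsf(c, mu, sigma)
    return -np.sum(w * ll)

fit = minimize(neg_weighted_loglik, x0=np.zeros(3), method="BFGS")
beta_hat = fit.x[:2]
```

With non-unit weights, the same function implements the weighted maximum likelihood idea: each subject's contribution is scaled by its inverse probability of remaining uncensored.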

4.
In this paper, we propose and develop a doubly restricted exponential dispersion model: a varying dispersion generalized linear model with two sets of restrictions, one set of linear restrictions on the mean response and, simultaneously, another set of linear restrictions on the dispersion of the distribution. This model is useful in situations where it is necessary to control or analyze drug doses, active effects in factorial experiments, and mean-variance relationships, among others. A penalized likelihood function is proposed and developed in order to obtain the restricted parameter estimates and the inferential results. Several special cases from the literature are discussed. A simply restricted varying dispersion beta regression model is exemplified by means of real and simulated data. Satisfactory and promising results are found.

5.
The present article discusses alternative regression models and estimation methods for dealing with multivariate fractional response variables. Both conditional mean models, estimable by quasi-maximum likelihood, and fully parametric models (Dirichlet and Dirichlet-multinomial), estimable by maximum likelihood, are considered. A new parameterization is proposed for the parametric models, which accommodates the most common specifications for the conditional mean (e.g., multinomial logit, nested logit, random parameters logit, dogit). The text also discusses at some length the specification analysis of fractional regression models, proposing several tests that can be performed through artificial regressions. Finally, an extensive Monte Carlo study evaluates the finite sample properties of most of the estimators and tests considered.

6.
A multivariate generalized Poisson regression model based on the multivariate generalized Poisson distribution is defined and studied. The regression model can be used to describe count data with any type of dispersion. The model allows for both positive and negative correlation between any pair of the response variables. The parameters of the regression model are estimated by maximum likelihood. Some test statistics are discussed, and two numerical data sets are used to illustrate the applications of the multivariate count data regression model.

7.

Ordinal data are often modeled using a continuous latent response distribution, which is partially observed through windows of adjacent intervals defined by cutpoints. In this paper we propose the beta distribution as a model for the latent response. The beta distribution has several advantages over the other commonly used distributions, e.g., normal and logistic. In particular, it enables separate modeling of location and dispersion effects, which is essential in the Taguchi method of robust design. First, we study the problem of estimating the location and dispersion parameters of a single beta distribution (representing a single treatment) from ordinal data, assuming known equispaced cutpoints. Two methods of estimation are compared: the maximum likelihood method and the method of moments. Two ways of treating the data are considered: in raw discrete form and in smoothed continuousized form. A large-scale simulation study is carried out to compare the different methods. The mean square errors of the estimates are obtained under a variety of parameter configurations. Comparisons are made based on the ratios of the mean square errors (called the relative efficiencies). No method is universally the best, but the maximum likelihood method using continuousized data is found to perform generally well, especially for estimating the dispersion parameter. This method is also computationally much faster than the other methods and does not experience convergence difficulties in the case of sparse or empty cells. Next, the problem of estimating unknown cutpoints is addressed. Here the multiple-treatment setup is considered since, in an actual application, cutpoints are common to all treatments and must be estimated from all the data. A two-step iterative algorithm is proposed for estimating the location and dispersion parameters of the treatments, and the cutpoints.
The proposed beta model and McCullagh's (1980) proportional odds model are compared by fitting them to two real data sets.

8.
In this paper we address the problem of estimating a vector of regression parameters in the Weibull censored regression model. Our main objective is to provide natural adaptive estimators that significantly improve upon the classical procedures in the situation where some of the predictors may or may not be associated with the response. In the context of two competing Weibull censored regression models (full model and candidate submodel), we consider an adaptive shrinkage estimation strategy that shrinks the full model maximum likelihood estimate in the direction of the submodel maximum likelihood estimate. We develop the properties of these estimators using the notion of asymptotic distributional risk. The shrinkage estimators are shown to have higher efficiency than the classical estimators for a wide class of models. Further, we consider a LASSO type estimation strategy and compare its relative performance with the shrinkage estimators. Monte Carlo simulations reveal that when the true model is close to the candidate submodel, the shrinkage strategy performs better than the LASSO strategy when, and only when, there are many inactive predictors in the model. Shrinkage and LASSO strategies are applied to a real data set from the Veterans Administration (VA) lung cancer study to illustrate the usefulness of the procedures in practice.

9.
The missing response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness is the most difficult missing data problem, in which the missingness of a response depends on its own value. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we obtain the constrained maximum empirical likelihood estimators of the parameters in the missing probability and the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial data set shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on observed data only is biased.

10.
In this article, we propose a beta regression model with multiplicative log-normal measurement errors. Three estimation methods are presented, namely, naive, calibration regression, and pseudo likelihood. The nuisance parameters are estimated from a system of estimation equations using replicated data and these estimates are used to propose a pseudo likelihood function. A simulation study was performed to assess some properties of the proposed methods. Results from an example with a real dataset, including diagnostic tools, are also reported.

11.
We propose a flexible semiparametric stochastic mixed effects model for bivariate cyclic longitudinal data. The model can handle either single cycle or, more generally, multiple consecutive cycle data. The approach models the mean of responses by parametric fixed effects and a smooth nonparametric function for the underlying time effects, and the relationship across the bivariate responses by a bivariate Gaussian random field and a joint distribution of random effects. The proposed model not only can model complicated individual profiles, but also allows for more flexible within-subject and between-response correlations. The fixed effects regression coefficients and the nonparametric time functions are estimated using maximum penalized likelihood, where the resulting estimator for the nonparametric time function is a cubic smoothing spline. The smoothing parameters and variance components are estimated simultaneously using restricted maximum likelihood. Simulation results show that the parameter estimates are close to the true values. The fit of the proposed model on a real bivariate longitudinal dataset of pre-menopausal women also performs well, both for a single cycle analysis and for a multiple consecutive cycle analysis. The Canadian Journal of Statistics 48: 471–498; 2020 © 2020 Statistical Society of Canada

12.
Inference for a generalized linear model is generally performed using asymptotic approximations for the bias and the covariance matrix of the parameter estimators. For small experiments, these approximations can be poor and result in estimators with considerable bias. We investigate the properties of designs for small experiments when the response is described by a simple logistic regression model and parameter estimators are to be obtained by the maximum penalized likelihood method of Firth [Firth, D., 1993, Bias reduction of maximum likelihood estimates. Biometrika, 80, 27–38]. Although this method achieves a reduction in bias, we illustrate that the remaining bias may be substantial for small experiments, and propose minimization of the integrated mean square error, based on Firth's estimates, as a suitable criterion for design selection. This approach is used to find locally optimal designs for two support points.
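Firth's penalized likelihood, which this abstract builds on, amounts to adding half the log-determinant of the Fisher information to the logistic log-likelihood. The following is a minimal sketch under an assumed simulated design (the data and sample size are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 40                                    # deliberately small experiment
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.0, 1.5])
prob = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < prob).astype(float)

def neg_penalized_loglik(beta):
    """Logistic log-likelihood plus Firth's Jeffreys-prior penalty."""
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))     # log(1 + e^eta)
    fisher = X.T @ ((mu * (1.0 - mu))[:, None] * X)       # X'WX
    _, logdet = np.linalg.slogdet(fisher)
    return -(loglik + 0.5 * logdet)                       # Firth (1993)

fit = minimize(neg_penalized_loglik, x0=np.zeros(2), method="BFGS")
beta_firth = fit.x
```

The penalty keeps the estimates finite even under separation, which is exactly why Firth's method is attractive for the small experiments the abstract considers.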

13.
The generalized Poisson (GP) regression model has been used to model count data that exhibit over-dispersion or under-dispersion. The zero-inflated GP (ZIGP) regression model can additionally handle count data characterized by many zeros. However, the parameters of the ZIGP model cannot easily be used for inference on overall exposure effects. To address this problem, a marginalized ZIGP model is proposed to directly model the population marginal mean count. The parameters of the marginalized zero-inflated GP model are estimated by the method of maximum likelihood. The regression model is illustrated with three real-life data sets.

14.
The Poisson regression model (PRM) is employed in modelling the relationship between a count variable (y) and one or more explanatory variables. The parameters of the PRM are popularly estimated using the Poisson maximum likelihood estimator (PMLE). Explanatory variables often move together, resulting in the problem of multicollinearity, and the variance of the PMLE becomes inflated in its presence. The Poisson ridge regression estimator (PRRE) and the Poisson Liu estimator (PLE) have been suggested as alternatives to the PMLE. In this study, we propose a new estimator of the regression coefficients of the PRM when multicollinearity is a challenge. We perform a simulation study under different specifications to assess the performance of the new estimator and the existing ones, evaluated using the scalar mean square error (SMSE) criterion and the mean squared prediction error. The aircraft damage data set is adopted for the application study, with the estimators' performance judged by the SMSE and the mean squared prediction error. The theoretical comparison shows that the proposed estimator outperforms the other estimators, which is further supported by the simulation study and the application results.
Keywords: Poisson regression model, Poisson maximum likelihood estimator, multicollinearity, Poisson ridge regression, Liu estimator, simulation
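The ridge idea referenced above (PRRE) can be sketched with numpy: fit the Poisson MLE by iteratively reweighted least squares, then shrink it through the ridge-adjusted information matrix. The collinear design, the ridge parameter k, and the data below are illustrative assumptions, not the paper's study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Two nearly collinear predictors to mimic multicollinearity
u = rng.standard_normal(n)
X = np.column_stack([np.ones(n), u, u + 0.05 * rng.standard_normal(n)])
beta_true = np.array([0.5, 0.3, 0.3])
y = rng.poisson(np.exp(X @ beta_true))

# Poisson maximum likelihood by iteratively reweighted least squares
beta = np.zeros(3)
for _ in range(50):
    mu = X @ beta
    mu = np.exp(mu)
    z = X @ beta + (y - mu) / mu              # working response
    XtWX = X.T @ (mu[:, None] * X)            # Fisher information X'WX
    beta = np.linalg.solve(XtWX, X.T @ (mu * z))

# Ridge adjustment: shrink the MLE via (X'WX + kI)^{-1} X'WX
k = 0.5                                       # ridge parameter (assumed)
mu = np.exp(X @ beta)
XtWX = X.T @ (mu[:, None] * X)
beta_ridge = np.linalg.solve(XtWX + k * np.eye(3), XtWX @ beta)
```

Because the shrinkage matrix has all eigenvalues below one, the ridge estimate always has smaller norm than the MLE, trading a little bias for a variance reduction that matters when X'WX is ill-conditioned.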

15.
This article examines methods to efficiently estimate the mean response in a linear model with an unknown error distribution under the assumption that the responses are missing at random. We show how the asymptotic variance is affected by the estimator of the regression parameter and by the imputation method. To estimate the regression parameter, ordinary least squares is efficient only if the error distribution happens to be normal. If the errors are not normal, we propose a one-step improvement estimator or a maximum empirical likelihood estimator to efficiently estimate the parameter. To investigate the impact of imputation on the estimation of the mean response, we compare the listwise deletion method and the propensity score method (which do not use imputation at all) with two imputation methods. We demonstrate that listwise deletion and the propensity score method are inefficient. Partial imputation, where only the missing responses are imputed, is compared to full imputation, where both missing and non-missing responses are imputed. Our results reveal that, in general, full imputation is better than partial imputation; however, when the regression parameter is estimated very poorly, partial imputation will outperform full imputation. The efficient estimator for the mean response is the full imputation estimator that utilizes an efficient estimator of the parameter.
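The listwise-deletion / partial-imputation / full-imputation contrast can be made concrete with a small simulation. This sketch uses ordinary least squares for the regression parameter (the abstract's efficient alternatives are not implemented here) and a missingness mechanism chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x = rng.standard_normal(n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # true mean response is 1.0
# Missing at random: response missingness depends only on the covariate x
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))
obs = rng.random(n) < p_obs

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
fitted = X @ beta_hat

# Listwise deletion: average the observed responses only (biased under MAR)
mean_listwise = y[obs].mean()
# Partial imputation: keep observed y, impute only the missing responses
mean_partial = np.where(obs, y, fitted).mean()
# Full imputation: replace every response by its fitted value
mean_full = fitted.mean()
```

Because large-x subjects are observed more often, the listwise mean is pulled upward, while both imputation estimators recover the population mean.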

16.
We define the odd log-logistic exponential Gaussian regression with two systematic components, which extends heteroscedastic Gaussian regression and is suitable for the bimodal data quite common in agriculture. We estimate the parameters by the method of maximum likelihood. Simulations indicate that the maximum likelihood estimators are accurate. The model assumptions are checked through case deletion and quantile residuals. The usefulness of the new regression model is illustrated by means of three real data sets from different areas of agriculture in which the data exhibit bimodality.

17.
The paper considers generalized maximum likelihood asymptotic power one tests which aim to detect a change point in logistic regression when the alternative specifies that a change occurred in the parameters of the model. A guaranteed non-asymptotic upper bound for the significance level of each of the tests is presented. For cases in which the test supports the conclusion that there was a change point, we propose a maximum likelihood estimator of that point and present results regarding the asymptotic properties of the estimator. An important field of application of this approach is occupational medicine, where, for many chemical compounds and other agents, so-called threshold limit values (TLVs) are specified. We demonstrate applications of the test and the maximum likelihood estimation of the change point using an actual problem encountered with real data.

18.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log‐binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non‐MLE approaches. In this paper we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non‐MLE methods. Nonetheless, non‐MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi‐likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi‐likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.  
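One well-known quasi-likelihood route to relative risks from the literature is modified Poisson regression (Zou, 2004): fit a Poisson working model with a log link to the binary outcome, so the exponentiated slope estimates the relative risk. This sketch is illustrative and is not necessarily among the specific estimators the paper evaluates; the data are simulated with a true relative risk of 2.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
x = (rng.random(n) < 0.5).astype(float)       # binary exposure
p = 0.2 * np.where(x == 1.0, 2.0, 1.0)        # risk 0.2 vs 0.4: RR = 2
y = (rng.random(n) < p).astype(float)

# Poisson IRLS applied to the binary outcome with a log link
X = np.column_stack([np.ones(n), x])
beta = np.array([np.log(y.mean()), 0.0])
for _ in range(50):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu              # working response
    XtWX = X.T @ (mu[:, None] * X)
    beta = np.linalg.solve(XtWX, X.T @ (mu * z))

rr_hat = np.exp(beta[1])                      # estimated relative risk
```

In practice this working-model approach needs a robust (sandwich) variance estimator, since the Poisson variance is misspecified for binary data; that caveat is part of why the paper's MLE-versus-quasi-likelihood comparison matters.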

19.
Kalman filtering techniques are widely used by engineers to recursively estimate random signal parameters which are essentially coefficients in a large-scale time series regression model. These Bayesian estimators depend on the values assumed for the mean and covariance parameters associated with the initial state of the random signal. This paper considers a likelihood approach to estimation and tests of hypotheses involving the critical initial means and covariances. A computationally simple convergent iterative algorithm is used to generate estimators which depend only on standard Kalman filter outputs at each successive stage. Conditions are given under which the maximum likelihood estimators are consistent and asymptotically normal. The procedure is illustrated using a typical large-scale data set involving 10-dimensional signal vectors.
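The key ingredient above, a likelihood for the initial-state parameters built from standard Kalman filter outputs, can be sketched for a scalar local-level model. The model, noise variances, and grid search below are illustrative assumptions, far simpler than the paper's 10-dimensional setting.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 100
q, r = 0.1, 1.0                  # state and observation noise variances
a = np.zeros(T)
a[0] = 5.0                       # true initial state
for t in range(1, T):
    a[t] = a[t - 1] + np.sqrt(q) * rng.standard_normal()
y = a + np.sqrt(r) * rng.standard_normal(T)

def loglik(m0, P0=1.0):
    """Prediction-error log-likelihood of a local-level Kalman filter,
    viewed as a function of the initial state mean m0."""
    m, P, ll = m0, P0, 0.0
    for t in range(T):
        P_pred = P + q               # predict: random-walk variance grows
        S = P_pred + r               # innovation variance
        v = y[t] - m                 # innovation (one-step prediction error)
        ll += -0.5 * (np.log(2 * np.pi * S) + v * v / S)
        K = P_pred / S               # Kalman gain
        m = m + K * v                # filtered mean update
        P = (1 - K) * P_pred         # filtered variance update
    return ll

# Maximize over the initial mean; the MLE should sit near the true value
grid = np.linspace(0.0, 10.0, 101)
m0_hat = grid[np.argmax([loglik(m) for m in grid])]
```

Everything the likelihood needs (innovations `v` and their variances `S`) is a by-product of the filter recursion, which is what makes the paper's iterative estimators depend only on standard Kalman filter outputs.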

20.
The properties of a method of estimating the ratio of parameters for ordered categorical response regression models are discussed. If the link function relating the response variable to the linear combination of covariates is unknown then it is only possible to estimate the ratio of regression parameters. This ratio of parameters has a substitutability or relative importance interpretation.

The maximum likelihood estimate of the ratio of parameters, assuming a logistic link function (McCullagh, 1980), is found to have very small bias for a wide variety of true link functions. Further, it is shown using Monte Carlo simulations that this maximum likelihood estimate has good coverage properties, even if the link function is incorrectly specified. It is demonstrated that combining adjacent categories to make the response binary can result in an analysis which is appreciably less efficient. The size of the efficiency loss depends on, among other factors, the marginal distribution over the ordered categories.
