Similar Documents
20 similar documents found (search time: 919 ms).
1.
Several approaches have been suggested for fitting linear regression models to censored data. These include Cox's proportional hazards models based on quasi-likelihoods. Methods of fitting based on least squares and maximum likelihood have also been proposed. The methods proposed so far all require special-purpose optimization routines. We describe an approach here which requires only a modified standard least squares routine.

We present methods for fitting a linear regression model to censored data by least squares and by the method of maximum likelihood. In the least squares method, the censored values are replaced by their expectations, and the residual sum of squares is minimized. Several variants are suggested in the ways in which the expectation is calculated. A parametric (assuming a normal error model) and two non-parametric approaches are described. We also present a method for solving the maximum likelihood equations in the estimation of the regression parameters in the censored regression situation. It is shown that the solutions can be obtained by a recursive algorithm which needs only a least squares routine for optimization. The suggested procedures gain considerably in computational efficiency. The Stanford Heart Transplant data are used to illustrate the various methods.
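A minimal sketch of the parametric (normal-error) least squares variant described above, assuming right censoring at random uniform times: each censored response is replaced by its conditional expectation via the inverse Mills ratio, and a standard least squares routine is re-run until the coefficients stabilise. All data and variable names are illustrative, not taken from the paper.

```python
# Sketch: censored values replaced by E[Y | Y > c] under normal errors,
# then ordinary least squares is iterated to convergence.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y_full = X @ np.array([2.0, 0.5]) + rng.normal(0, 1.0, n)
c = rng.uniform(4, 10, n)            # censoring times
observed = y_full <= c               # True = exact observation
y = np.where(observed, y_full, c)    # censored responses recorded at c

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # naive start
sigma = 1.0
for _ in range(100):
    mu = X @ beta
    z = (y - mu) / sigma
    # E[Y | Y > c] = mu + sigma * phi(z) / (1 - Phi(z)) (inverse Mills ratio)
    imputed = mu + sigma * norm.pdf(z) / norm.sf(z)
    y_work = np.where(observed, y, imputed)
    beta_new = np.linalg.lstsq(X, y_work, rcond=None)[0]
    # simple plug-in scale update (slightly understates sigma)
    sigma = np.std(y_work - X @ beta_new, ddof=X.shape[1])
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("estimated coefficients:", beta)
```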

2.
Parameter estimates of a new distribution for the strength of brittle fibers and composite materials are considered. An algorithm for generating random numbers from the distribution is suggested. Two parameter estimation methods, one based on a simple least squares procedure and the other based on the maximum likelihood principle, are studied using Monte Carlo simulation. In most cases, the maximum likelihood estimators were found to have somewhat smaller root mean squared error and bias than the least squares estimators. However, the least squares estimates are generally good and provide useful initial values for the numerical iteration used to find the maximum likelihood estimates.
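The paper's new strength distribution is not reproduced here, so the sketch below only illustrates the generic workflow the abstract describes — a simple least squares fit on a probability plot supplying initial values for the maximum likelihood iteration — using the two-parameter Weibull as a stand-in.

```python
# Sketch: least squares probability-plot estimates feeding an MLE search.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.weibull(2.0, 300) * 5.0                  # shape 2, scale 5

# Step 1: least squares on the Weibull probability plot.
xs = np.sort(x)
p = (np.arange(1, len(xs) + 1) - 0.5) / len(xs)  # plotting positions
slope, intercept = np.polyfit(np.log(xs), np.log(-np.log(1 - p)), 1)
k0, lam0 = slope, np.exp(-intercept / slope)     # LS initial values

# Step 2: maximum likelihood, started at the least squares estimates.
def negloglik(theta):
    k, lam = np.exp(theta)                       # keep parameters positive
    z = x / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

mle = np.exp(minimize(negloglik, np.log([k0, lam0]), method="Nelder-Mead").x)
print("LS start:", (k0, lam0), "MLE:", mle)
```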

3.
Correlated survival data arise frequently in biomedical and epidemiologic research, because each patient may experience multiple events or because there exists clustering of patients or subjects, such that failure times within the cluster are correlated. In this paper, we investigate the appropriateness of the semi-parametric Cox regression and of the generalized estimating equations as models for clustered failure time data that arise from an epidemiologic study in veterinary medicine. The semi-parametric approach is compared with a proposed fully parametric frailty model. The frailty component is assumed to follow a gamma distribution. Estimates of the fixed covariate effects were obtained by maximizing the likelihood function, while an estimate of the variance component (frailty parameter) was obtained from a profile likelihood construction.

4.
Relative risk frailty models are used extensively in analyzing clustered and/or recurrent time-to-event data. In this paper, Laplace's approximation for integrals is applied to marginal distributions of data arising from parametric relative risk frailty models. Under regularity conditions, the approximate maximum likelihood estimators (MLEs) are consistent, with a rate of convergence that depends on both the number of subjects and the number of members per subject. We compare the approximate MLEs against alternative estimators using a limited simulation and demonstrate the utility of Laplace's approximation approach by analyzing U.S. patient waiting time to deceased kidney transplant data.
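As a hedged illustration of the core device, the sketch below applies Laplace's approximation to a one-dimensional frailty integral — one subject with m exponential event times and a gamma(alpha, alpha) frailty — chosen because the marginal likelihood has a closed form to check against. The model and values are illustrative, not the paper's.

```python
# Sketch: Laplace approximation of a frailty marginal likelihood integral.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

m, lam, T, alpha = 3, 0.8, 4.0, 2.0   # events, rate, total exposure, shape

def log_integrand(u):
    # (u*lam)^m * exp(-u*lam*T) times the gamma(alpha, alpha) density of u
    return (m * np.log(u * lam) - u * lam * T + alpha * np.log(alpha)
            + (alpha - 1) * np.log(u) - alpha * u - gammaln(alpha))

# Laplace: expand h(u) = (m+alpha-1)*log(u) - (lam*T+alpha)*u about its mode.
a, b = m + alpha - 1, lam * T + alpha
u_hat = a / b                           # mode of the integrand
h2 = -a / u_hat ** 2                    # h''(u_hat)
laplace = np.exp(log_integrand(u_hat)) * np.sqrt(2 * np.pi / -h2)

exact = np.exp(m * np.log(lam) + alpha * np.log(alpha) + gammaln(m + alpha)
               - gammaln(alpha) - (m + alpha) * np.log(b))
numeric, _ = quad(lambda u: np.exp(log_integrand(u)), 0, np.inf)
print(laplace, exact, numeric)        # Laplace vs closed form vs quadrature
```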

5.
Local maximum likelihood estimation is a nonparametric counterpart of the widely used parametric maximum likelihood technique. It extends the scope of the parametric maximum likelihood method to a much wider class of parametric spaces. Associated with this nonparametric estimation scheme is the issue of bandwidth selection and bias and variance assessment. This paper provides a unified approach to selecting a bandwidth and constructing confidence intervals in local maximum likelihood estimation. The approach is then applied to least squares nonparametric regression and to nonparametric logistic regression. Our experiences in these two settings show that the general idea outlined here is powerful and encouraging.
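A minimal sketch of local maximum likelihood in the nonparametric logistic setting mentioned above: observations receive Gaussian kernel weights around a target point x0, and a weighted local-linear Bernoulli likelihood is maximised there. The bandwidth h is simply fixed by hand; choosing it and assessing bias and variance is precisely the paper's subject. Data and names are illustrative.

```python
# Sketch: kernel-weighted (local) maximum likelihood logistic regression.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 400)
y = rng.binomial(1, 1 / (1 + np.exp(-2 * np.sin(x))))   # nonlinear truth

def local_logit_fit(x0, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)               # kernel weights
    def negloglik(beta):
        eta = beta[0] + beta[1] * (x - x0)               # local-linear logit
        return -np.sum(w * (y * eta - np.logaddexp(0.0, eta)))
    beta_hat = minimize(negloglik, np.zeros(2), method="BFGS").x
    return 1 / (1 + np.exp(-beta_hat[0]))                # P(Y=1 | x = x0)

for x0 in (-2.0, 0.0, 2.0):
    print(x0, round(local_logit_fit(x0, h=0.7), 3))
```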

6.
We conducted confirmatory factor analysis (CFA) of responses (N=803) to a self-reported measure of optimism, using full-information estimation via adaptive quadrature (AQ), an alternative estimation method for ordinal data. We evaluated AQ results in terms of the number of iterations required to achieve convergence, model fit, parameter estimates, standard errors (SEs), and statistical significance, across four link functions (logit, probit, log-log, complementary log-log) using 3–10 and 20 quadrature points. We compared AQ results with those obtained using maximum likelihood, robust maximum likelihood, and robust diagonally weighted least-squares estimation. Compared to the other two link functions, logit and probit not only produced fit statistics, parameter estimates, SEs, and levels of significance that varied less across numbers of quadrature points, but also fitted the data better and provided larger completely standardised loadings than did maximum likelihood and diagonally weighted least-squares. Our findings demonstrate the viability of using full-information AQ to estimate CFA models with real-world ordinal data.

7.
A maximum likelihood solution is presented for analyzing data which arise from a linear model whose error term is assumed to have variance proportional to some unknown power of the response. An efficient iterative method for solving the likelihood equations is obtained which incorporates a transformation to orthogonalize the two variance parameters. Assessments of the method are made through a simulation study, and the results are compared with those of ordinary least squares. Examples from the literature are included to illustrate the method and to compare the results with weighted least squares estimates.
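The iterative scheme can be sketched as follows for Var(y_i) = sigma^2 * mu_i^theta: weighted least squares for the regression coefficients alternates with a one-dimensional profile likelihood search for theta, with sigma^2 profiled out (a simplification; the paper instead orthogonalises the two variance parameters). It assumes mu_i > 0; data and names are illustrative.

```python
# Sketch: alternating WLS for beta and 1-D profile likelihood for theta.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.uniform(1, 5, n)])
mu_true = X @ np.array([1.0, 2.0])
y = mu_true + rng.normal(0, 0.3 * mu_true ** 0.75)    # theta = 1.5

beta = np.linalg.lstsq(X, y, rcond=None)[0]
theta = 0.0
for _ in range(20):
    mu = X @ beta
    e2 = (y - mu) ** 2
    def neg_profile_loglik(th):
        m = mu ** th                                  # variance multipliers
        s2 = np.mean(e2 / m)                          # profiled sigma^2
        return n * np.log(s2) + np.sum(np.log(m))     # = -2*loglik + const
    theta = minimize_scalar(neg_profile_loglik, bounds=(0.0, 5.0),
                            method="bounded").x
    sw = mu ** (-theta / 2)                           # sqrt of WLS weights
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print("beta:", beta, "theta:", theta)
```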

8.
We propose a bivariate Weibull regression model with heterogeneity (frailty, or random effect) which is generated by a Weibull distribution. We assume that the bivariate survival data follow the bivariate Weibull of Hanagal (Econ Qual Control 19:83–90, 2004). There are some interesting situations, such as survival times in genetic epidemiology, dental implants of patients, and twin births (both monozygotic and dizygotic), where the genetic behavior (which is unknown and random) of patients follows a known frailty distribution. These situations motivate the study of this particular model. We propose two-stage maximum likelihood estimation for the hierarchical likelihood in the proposed model. We present a small simulation study to compare these estimates with the true values of the parameters, and it is observed that the estimates are very close to the true values.

9.
Recent results in information theory, see Soofi (1996; 2001) for a review, include derivations of optimal information processing rules, including Bayes' theorem, for learning from data based on minimizing a criterion functional, namely output information minus input information as shown in Zellner (1988; 1991; 1997; 2002). Herein, solution post data densities for parameters are obtained and studied for cases in which the input information is that in (1) a likelihood function and a prior density; (2) only a likelihood function; and (3) neither a prior nor a likelihood function but only input information in the form of post data moments of parameters, as in the Bayesian method of moments approach. Then it is shown how optimal output densities can be employed to obtain predictive densities and optimal, finite sample structural coefficient estimates using three alternative loss functions. Such optimal estimates are compared with usual estimates, e.g., maximum likelihood, two-stage least squares, ordinary least squares, etc. Some Monte Carlo experimental results in the literature are discussed and implications for the future are provided.

10.
In this paper, we consider the problem of estimation of semi-linear regression models. Using invariance arguments, Bhowmik and King [2007. Maximal invariant likelihood based testing of semi-linear models. Statist. Papers 48, 357–383] derived the probability density function of the maximal invariant statistic for the non-linear component of these models. Using this density function as a likelihood function allows us to estimate these models in a two-step process. First the non-linear component parameters are estimated by maximising the maximal invariant likelihood function. Then the non-linear component, with the parameter values replaced by estimates, is treated as a regressor and ordinary least squares is used to estimate the remaining parameters. We report the results of a simulation study conducted to compare the accuracy of this approach with full maximum likelihood and maximum profile-marginal likelihood estimation. We find maximising the maximal invariant likelihood function typically results in less biased and lower variance estimates than those from full maximum likelihood.

11.
The bootstrap is a methodology for estimating standard errors. The idea is to use a Monte Carlo simulation experiment based on a nonparametric estimate of the error distribution. The main objective of this article is to demonstrate the use of the bootstrap to attach standard errors to coefficient estimates in a second-order autoregressive model fitted by least squares and maximum likelihood estimation. Additionally, a comparison of the bootstrap and the conventional methodology is made. As it turns out, the conventional asymptotic formulae (for both the least squares and maximum likelihood estimates) for estimating standard errors appear to overestimate the true standard errors. But there are two problems: (i) the first two observations y1 and y2 have been fixed, and (ii) the residuals have not been inflated. After these two factors are accounted for in the trial and bootstrap experiment, both the conventional maximum likelihood and bootstrap estimates of the standard errors appear to perform quite well.
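A minimal residual-bootstrap sketch for the AR(2) least squares fit, incorporating the two corrections flagged above: residuals are centred and inflated before resampling, and each replicate regenerates its own start-up values through a burn-in rather than holding y1 and y2 fixed. Parameter values are illustrative.

```python
# Sketch: residual bootstrap standard errors for a least-squares AR(2) fit.
import numpy as np

rng = np.random.default_rng(4)

def simulate_ar2(coef, innov, burn=100):
    y = np.zeros(len(innov))
    for t in range(2, len(y)):
        y[t] = coef[0] * y[t - 1] + coef[1] * y[t - 2] + innov[t]
    return y[burn:]                          # burn-in randomises start-up

def fit_ar2(y):
    Y, X = y[2:], np.column_stack([y[1:-1], y[:-2]])
    coef = np.linalg.lstsq(X, Y, rcond=None)[0]
    return coef, Y - X @ coef

n = 200
y = simulate_ar2([0.6, -0.3], rng.normal(0, 1, n + 100))
coef_hat, resid = fit_ar2(y)
# centre and inflate the residuals to undo least-squares shrinkage
resid = (resid - resid.mean()) * np.sqrt(len(resid) / (len(resid) - 2))

boot = np.array([
    fit_ar2(simulate_ar2(coef_hat, rng.choice(resid, size=n + 100)))[0]
    for _ in range(999)
])
print("bootstrap SEs:", boot.std(axis=0, ddof=1))
```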

12.
Modelling volatility in the form of a conditional variance function has been a popular method, mainly due to its application in financial risk management. Among others, we distinguish the parametric GARCH models and the nonparametric local polynomial approximation using weighted least squares or a Gaussian likelihood function. We introduce an alternative likelihood estimate of the conditional variance and show that substituting the error density with its estimate yields similar asymptotic properties; that is, the proposed estimate is adaptive to the error distribution. Theoretical comparison with existing estimates reveals substantial gains in efficiency, especially if the error distribution has fatter tails than the Gaussian distribution. Simulated data confirm the theoretical findings, while an empirical example demonstrates the gains of the proposed estimate.

13.
Econometric Reviews, 2013, 32(2): 203–215.

14.
Abstract

Analysis of right-censored data is problematic due to infinite maximum likelihood estimates (MLEs) and potentially biased estimates, especially for small numbers of events. Analyzing current-status data is especially troublesome because of the extreme loss of precision due to large failure intervals. We extend Firth's method for regular parametric problems to current-status modeling with the Weibull distribution. Firth advocated a bias reduction method for MLEs that systematically corrects the score equation. An advantage is that it remains applicable when the MLE does not exist. We present simulation studies and two illustrative analyses involving RFM mice lung tumor data.
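A rough sketch of a Firth-type correction in this setting, assuming two-parameter Weibull current-status data: the log-likelihood for inspection times c_i and status indicators d_i is penalised by one half the log-determinant of an information matrix, here approximated by a finite-difference observed information as a stand-in for the Fisher information the method properly employs. All data and values are illustrative.

```python
# Sketch: penalised likelihood l(theta) + 0.5*log det(information(theta)).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 40
t = rng.weibull(1.5, n) * 2.0          # latent failure times (never observed)
c = rng.uniform(0.2, 4.0, n)           # inspection times
d = (t <= c).astype(float)             # 1 = failed by inspection time

def loglik(theta):
    k, lam = np.exp(theta)             # log-parametrised for positivity
    F = np.clip(1 - np.exp(-(c / lam) ** k), 1e-12, 1 - 1e-12)
    return np.sum(d * np.log(F) + (1 - d) * np.log(1 - F))

def observed_info(theta, eps=1e-4):
    H = np.zeros((2, 2))               # central-difference Hessian of loglik
    for i in range(2):
        for j in range(2):
            ei, ej = np.eye(2)[i] * eps, np.eye(2)[j] * eps
            H[i, j] = (loglik(theta + ei + ej) - loglik(theta + ei - ej)
                       - loglik(theta - ei + ej) + loglik(theta - ei - ej))
            H[i, j] /= 4 * eps ** 2
    return -H

def neg_penalised_loglik(theta):
    sign, logdet = np.linalg.slogdet(observed_info(theta))
    if sign <= 0:
        return np.inf                  # penalty undefined here; step back
    return -(loglik(theta) + 0.5 * logdet)

res = minimize(neg_penalised_loglik, np.log([1.0, 1.0]), method="Nelder-Mead")
print("penalised estimates (shape, scale):", np.exp(res.x))
```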

15.
In this article, we propose a new three-parameter probability distribution, called the Topp–Leone normal, for modelling increasing failure rate data. The distribution is obtained by using the Topp–Leone-X family of distributions with the normal as a baseline model. Basic properties including moments, the quantile function, stochastic ordering, and order statistics are derived. The estimation of the unknown parameters is approached by the methods of maximum likelihood, least squares, weighted least squares, and maximum product spacings. An extensive simulation study is carried out to compare the long-run performance of the estimators. Applicability of the distribution is illustrated by means of three real data analyses, with comparisons against existing distributions.
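Of the four estimation methods listed, maximum product spacings is the least standard, so here is a sketch for the Topp–Leone normal, assuming the usual Topp–Leone-X construction F(x) = [1 - (1 - Phi((x - mu)/sigma))^2]^alpha; verify this against the paper's definition before reuse. Data are simulated by inverting that CDF, and all names and values are illustrative.

```python
# Sketch: maximum product spacings = maximise the mean log CDF spacing.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def tl_normal_cdf(x, alpha, mu, sigma):
    G = norm.cdf((x - mu) / sigma)
    return (1 - (1 - G) ** 2) ** alpha

# Inversion: F(x) = u  =>  G = 1 - sqrt(1 - u**(1/alpha))
alpha0, mu0, sigma0 = 2.0, 0.0, 1.0
u = rng.uniform(size=200)
x = mu0 + sigma0 * norm.ppf(1 - np.sqrt(1 - u ** (1 / alpha0)))

def neg_mean_log_spacings(par):
    alpha, mu, sigma = np.exp(par[0]), par[1], np.exp(par[2])
    F = tl_normal_cdf(np.sort(x), alpha, mu, sigma)
    spacings = np.diff(np.concatenate([[0.0], F, [1.0]]))   # incl. endpoints
    return -np.mean(np.log(np.clip(spacings, 1e-300, None)))

start = np.array([0.0, x.mean(), np.log(x.std())])
res = minimize(neg_mean_log_spacings, start, method="Nelder-Mead")
print("MPS estimates:", np.exp(res.x[0]), res.x[1], np.exp(res.x[2]))
```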

16.
We consider the estimation of the life length of people who were born in the seventeenth or eighteenth century in England. The data consist of a sequence of times of life events that is either ended by a time of death or right-censored by an unobserved time of migration. We propose a semiparametric model for the data and use a maximum likelihood method to estimate the unknown parameters in this model. We prove the consistency of the maximum likelihood estimators and describe an algorithm to obtain the estimates numerically. We apply the algorithm to the data and present the resulting estimates.

17.
Methods for modelling overdispersed data are compared. These methods are considered to be of two kinds: a likelihood-based approach and a method-of-moments-based approach. The likelihood method facilitates computation of maximum likelihood estimates, which can be obtained through the same algorithm as that of weighted least squares. The quasi-likelihood or moment approaches seem to be appropriate when severe overdispersion may be present. The comparisons are made via analyses of the Ames Salmonella Reverse Mutagenicity Assay (Margolin et al., 1981) and a seed dataset (Crowder, 1978).
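Both strands compared above fit into a few lines: Poisson maximum likelihood computed by iteratively reweighted least squares (the "same algorithm as weighted least squares" remark), followed by the Pearson moment estimate of dispersion that the quasi-likelihood approach uses to inflate standard errors. Data and names are illustrative.

```python
# Sketch: Poisson IRLS fit, then a moment estimate of overdispersion.
import numpy as np

rng = np.random.default_rng(7)
n = 250
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu_true = np.exp(X @ np.array([0.5, 0.8]))
# negative-binomial counts with mean mu_true => genuine overdispersion
y = rng.negative_binomial(5, 5 / (5 + mu_true))

beta = np.zeros(2)
for _ in range(25):                       # IRLS for the Poisson log link
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu          # working response
    sw = np.sqrt(mu)                      # sqrt of working weights
    beta = np.linalg.lstsq(X * sw[:, None], z * sw, rcond=None)[0]

mu = np.exp(X @ beta)
phi = np.sum((y - mu) ** 2 / mu) / (n - X.shape[1])   # Pearson X^2 / df
print("beta:", beta, "dispersion:", phi)  # phi well above 1 flags overdispersion
```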

18.
Acceptance of ARIMA processes as valuable univariate forecasting mechanisms is increasing. Maximum likelihood estimation of the parameters is complicated, and least squares approximations are not always satisfactory. The singular value decomposition is used here to determine numerically accurate values of the likelihood function for a given set of parameter estimates. Suggestions for efficient computational search procedures for the maximum likelihood estimators are made.
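The SVD route to a numerically accurate Gaussian likelihood can be sketched for the simplest ARIMA special case, an MA(1): the observation covariance matrix is factorised by SVD, giving a stable log-determinant and quadratic form even when the matrix is near-singular. Values are illustrative.

```python
# Sketch: exact Gaussian log-likelihood of an MA(1) evaluated via the SVD.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(8)
n, theta = 150, 0.7
e = rng.normal(0, 1.0, n + 1)
y = e[1:] + theta * e[:-1]                # MA(1) sample, sigma^2 = 1

def ma1_loglik(y, theta, sigma2):
    gamma = np.zeros(len(y))
    gamma[0] = sigma2 * (1 + theta ** 2)  # lag-0 autocovariance
    gamma[1] = sigma2 * theta             # lag-1 autocovariance
    Sigma = toeplitz(gamma)
    U, s, Vt = np.linalg.svd(Sigma)
    logdet = np.sum(np.log(s))            # stable log-determinant
    quad = (U.T @ y) @ ((Vt @ y) / s)     # y' Sigma^{-1} y via the SVD
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + quad)

print(ma1_loglik(y, 0.7, 1.0), ma1_loglik(y, 0.2, 1.0))
```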

19.
The generalised least squares, maximum likelihood, Bain–Antle 1 and 2, and two mixed methods of estimating the parameters of the two-parameter Weibull distribution are compared. The comparison is made using (a) the observed relative efficiency of the parameter estimates and (b) the mean squared relative error in estimated quantiles, to summarize the results of 1000 simulated samples of sizes 10 and 25. The results are that: (i) generalised least squares is the best method of estimating the shape parameter β; (ii) the best method of estimating the scale parameter α depends on the size of β; (iii) for quantile estimation, maximum likelihood is best; and (iv) Bain–Antle 2 is uniformly the worst of the methods.

20.
In this paper, the estimation of parameters for a three-parameter Weibull distribution based on progressively Type-II right-censored samples is studied. Different estimation procedures for complete samples are generalized to the case with progressively censored data. These methods include the maximum likelihood estimators (MLEs), corrected MLEs, weighted MLEs, maximum product spacings estimators, and least squares estimators. We also propose the use of a censored estimation method with one-step bias correction to obtain reliable initial estimates for the iterative procedures. These methods are compared via a Monte Carlo simulation study in terms of their biases, root mean squared errors, and rates of obtaining reliable estimates. Recommendations are made from the simulation results, and a numerical example is presented to illustrate all of the methods of inference developed here.
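A minimal sketch of the progressively Type-II censored Weibull log-likelihood being maximised, dropping the location parameter for brevity: with ordered observed failures x_i and removal counts R_i, the likelihood is proportional to the product of f(x_i) * S(x_i)^R_i. Data generation uses the special case R = (0, ..., 0, n - m), i.e. ordinary Type-II censoring, to keep the sketch short; names and values are illustrative.

```python
# Sketch: MLE under progressive Type-II censoring for a Weibull sample.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n, m = 100, 60                               # n on test, stop at m-th failure
x = np.sort(rng.weibull(1.8, n) * 3.0)[:m]   # observed ordered failures
R = np.zeros(m)
R[-1] = n - m                                # all survivors removed at the end

def negloglik(theta):
    k, lam = np.exp(theta)                   # log-parametrised for positivity
    z = x / lam
    logf = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # log density
    logS = -z ** k                                          # log survival
    return -np.sum(logf + R * logS)          # -log prod f(x_i)*S(x_i)^R_i

res = minimize(negloglik, np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE (shape, scale):", np.exp(res.x))
```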

