Similar Documents
13 similar documents found.
1.
The approximate Bayesian computation (ABC) algorithm is used to estimate parameters of complicated phenomena for which the likelihood is intractable. Here, we report the development of an algorithm for choosing the tolerance level for ABC. We illustrate the performance of the proposed method by simulating the estimation of scaled mutation and recombination rates. The results show that the proposed algorithm performs well.
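A minimal rejection-ABC sketch, assuming a toy Poisson simulator, a sample-mean summary statistic, and a simple quantile heuristic for the tolerance; these are illustrative assumptions, not the tolerance-selection algorithm developed in the paper.

```python
# Minimal rejection-ABC sketch (illustrative only; the tolerance rule here is a
# simple quantile heuristic, not the algorithm proposed in the paper).
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    """Toy simulator: Poisson counts with rate theta (stand-in for a genetic model)."""
    return rng.poisson(theta, size=n)

def summary(x):
    """Summary statistic: sample mean."""
    return x.mean()

observed = simulate(5.0)          # pretend these are the observed data
s_obs = summary(observed)

# Draw parameters from the prior and record the distance of each simulated summary.
prior_draws = rng.uniform(0.1, 20.0, size=20_000)
distances = np.array([abs(summary(simulate(t)) - s_obs) for t in prior_draws])

# Tolerance chosen as the 1% quantile of the distances (a common heuristic).
epsilon = np.quantile(distances, 0.01)
posterior_sample = prior_draws[distances <= epsilon]

print(f"tolerance = {epsilon:.3f}, accepted = {posterior_sample.size}, "
      f"posterior mean = {posterior_sample.mean():.3f}")
```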

2.
The regression function R(·) to be estimated is assumed to have an expansion in terms of specified functions, orthogonalized with respect to the values of the explanatory variable. Relative precisions of the observations are assumed known. The estimate is the posterior linear mean of R(·) given the data. The investigator plots graphs of appropriate functions as an aid in eliciting his prior means and precisions for the coefficients in the expansion. The method is illustrated by an example using simulated data, an example in which the effects of various dosages of Vitamin D are estimated, and an example in which a utility function is estimated.
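As a rough illustration of the posterior linear mean, the sketch below computes the Bayesian estimate of the expansion coefficients under a normal prior with elicited means and precisions and known observation precisions; the polynomial basis, prior values, and variable names are assumptions for illustration, not taken from the paper.

```python
# Sketch: posterior mean of expansion coefficients under a normal prior with
# elicited prior means m and precisions (diagonal P), and known observation
# precisions (diagonal W).  Basis and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Design matrix of basis functions evaluated at x (polynomials, then
# orthogonalized with respect to the observed x values via QR).
B_raw = np.vander(x, N=4, increasing=True)
B, _ = np.linalg.qr(B_raw)

W = np.eye(x.size)                 # relative precisions of the observations
m = np.zeros(B.shape[1])           # elicited prior means of the coefficients
P = np.diag([0.1, 1.0, 1.0, 1.0])  # elicited prior precisions

# Posterior linear mean: (B'WB + P)^{-1} (B'Wy + Pm); the fitted curve is B @ beta_post.
beta_post = np.linalg.solve(B.T @ W @ B + P, B.T @ W @ y + P @ m)
print(beta_post)
```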

3.
This study compares the SPSS ordinary least squares (OLS) regression and ridge regression procedures in dealing with multicollinear data. The OLS regression method is one of the most frequently applied statistical procedures in practice. It is well documented that the OLS method is extremely unreliable for parameter estimation when the independent variables are themselves dependent (the multicollinearity problem). The ridge regression procedure deals with the multicollinearity problem by introducing a small bias into the parameter estimates. Applying ridge regression involves selecting a bias parameter, and it is not clear how well it works in applications. This study uses a Monte Carlo method to compare the results of the OLS procedure with the ridge regression procedure in SPSS.
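A small Monte Carlo sketch of this kind of comparison, using scikit-learn rather than SPSS; the design with two nearly collinear predictors, the true coefficients, and the ridge penalty are illustrative choices.

```python
# Monte Carlo sketch comparing OLS and ridge regression under multicollinearity.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(42)
beta_true = np.array([1.0, 2.0, -1.5])
mse_ols, mse_ridge = [], []

for _ in range(500):
    x1 = rng.normal(size=100)
    x2 = x1 + rng.normal(scale=0.01, size=100)   # nearly collinear with x1
    x3 = rng.normal(size=100)
    X = np.column_stack([x1, x2, x3])
    y = X @ beta_true + rng.normal(size=100)

    b_ols = LinearRegression().fit(X, y).coef_
    b_ridge = Ridge(alpha=1.0).fit(X, y).coef_   # small bias, much lower variance

    mse_ols.append(np.mean((b_ols - beta_true) ** 2))
    mse_ridge.append(np.mean((b_ridge - beta_true) ** 2))

print(f"mean squared error of coefficients: OLS={np.mean(mse_ols):.3f}, "
      f"ridge={np.mean(mse_ridge):.3f}")
```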

4.
Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120–S127, 2002) uses a reformulation of the Cox likelihood as a Poisson-type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables such as disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox scores and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised versions outperform other Cox PLS methods.
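The supervised pre-selection step can be sketched as below, assuming the lifelines and pandas libraries; genes are ranked by the absolute z-statistic of a univariate Cox fit as a stand-in for the Cox score, and only the top k% are retained. The Cox-PLS estimation itself is not reproduced, and the simulated data are purely illustrative.

```python
# Sketch of supervised gene pre-selection: rank genes by a univariate Cox
# score (here |z| from lifelines) and keep the highest-ranking k%.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n, p, k_percent = 200, 50, 10
genes = pd.DataFrame(rng.normal(size=(n, p)),
                     columns=[f"g{j}" for j in range(p)])
time = rng.exponential(scale=np.exp(-0.5 * genes["g0"]))  # g0 truly prognostic
event = rng.integers(0, 2, size=n)                        # 1 = event observed

scores = {}
for g in genes.columns:
    df = pd.DataFrame({"time": time, "event": event, g: genes[g]})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    scores[g] = abs(cph.summary.loc[g, "z"])

n_keep = max(1, int(p * k_percent / 100))
selected = sorted(scores, key=scores.get, reverse=True)[:n_keep]
print("retained genes:", selected)
```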

5.
Bayesian selection of variables is often difficult to carry out because of the challenges in specifying prior distributions for the regression parameters of all possible models, specifying a prior distribution on the model space, and carrying out the computations. We address these three issues for the logistic regression model. For the first, we propose an informative prior distribution for variable selection. Several theoretical and computational properties of the prior are derived and illustrated with several examples. For the second, we propose a method for specifying an informative prior on the model space, and for the third we propose novel methods for computing the marginal distribution of the data. The new computational algorithms require only Gibbs samples from the full model to facilitate the computation of the prior and posterior model probabilities for all possible models. Several properties of the algorithms are also derived. The prior specification for the first challenge focuses on the observables, in that the elicitation is based on a prior prediction y0 for the response vector and a quantity a0 quantifying the uncertainty in y0. Then y0 and a0 are used to specify a prior for the regression coefficients semi-automatically. Examples using real data are given to demonstrate the methodology.

6.
A class of trimmed linear conditional estimators based on regression quantiles for the linear regression model is introduced. This class serves as a robust analogue of non-robust linear unbiased estimators. Asymptotic analysis then shows that the trimmed least squares estimator based on regression quantiles (Koenker and Bassett (1978)) is the best in this class in terms of asymptotic covariance matrices. The class of trimmed linear conditional estimators contains the Mallows-type bounded-influence trimmed means (see De Jongh et al. (1988)) and trimmed instrumental variables estimators. A large-sample methodology based on the trimmed instrumental variables estimator for confidence ellipsoids and hypothesis testing is also provided.
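A sketch of the trimmed least squares idea of Koenker and Bassett (1978) using statsmodels: fit regression quantiles at α and 1−α, discard observations outside the two quantile hyperplanes, and apply OLS to the remainder. The trimming proportion and the data-generating model are illustrative assumptions.

```python
# Koenker-Bassett trimmed least squares sketch: trim by regression quantiles,
# then run OLS on the retained observations.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
n, alpha = 300, 0.1
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)   # heavy-tailed errors
X = sm.add_constant(x)

fit_lo = QuantReg(y, X).fit(q=alpha)
fit_hi = QuantReg(y, X).fit(q=1 - alpha)
keep = (y > fit_lo.predict(X)) & (y < fit_hi.predict(X))

trimmed_ls = sm.OLS(y[keep], X[keep]).fit()
print(trimmed_ls.params)   # robust analogue of the OLS coefficient estimates
```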

7.
Consider a partially linear regression model with an unknown vector parameter β, an unknown function g(·), and unknown heteroscedastic error variances. In this paper we develop an asymptotic semiparametric generalized least squares estimation theory under weak moment conditions. These moment conditions are satisfied by many of the error distributions encountered in practice, and our theory does not require the number of replications to go to infinity.

8.
Several approaches have been suggested for fitting linear regression models to censored data. These include Cox's proportional hazards models based on quasi-likelihoods. Methods of fitting based on least squares and on maximum likelihood have also been proposed. The methods proposed so far all require special-purpose optimization routines. We describe an approach here which requires only a modified standard least squares routine.

We present methods for fitting a linear regression model to censored data by least squares and by the method of maximum likelihood. In the least squares method, the censored values are replaced by their expectations, and the residual sum of squares is minimized. Several variants are suggested in the way in which the expectation is calculated: a parametric approach (assuming a normal error model) and two non-parametric approaches are described. We also present a method for solving the maximum likelihood equations for the regression parameters in the censored regression situation. It is shown that the solutions can be obtained by a recursive algorithm which needs only a least squares routine for optimization. The suggested procedures gain considerably in computational efficiency. The Stanford Heart Transplant data are used to illustrate the various methods.
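The parametric (normal-error) least squares variant can be sketched as an iterative scheme in which right-censored responses are replaced by their conditional expectations under the current fit; the data-generating setup, convergence tolerance, and iteration cap below are illustrative assumptions.

```python
# Iterative least squares for right-censored data under a normal error model:
# replace each censored response by E[Y | Y > c] at the current fit, refit, repeat.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 200
x = rng.uniform(0, 5, n)
X = np.column_stack([np.ones(n), x])
y_latent = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)
c = rng.uniform(4, 12, n)                 # right-censoring times
y = np.minimum(y_latent, c)
delta = y_latent <= c                     # True = fully observed

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # naive starting values
for _ in range(50):
    mu = X @ beta
    sigma = np.std(y[delta] - mu[delta])
    z = (c - mu) / sigma
    # E[Y | Y > c] for a normal error model (inverse Mills ratio correction).
    y_fill = np.where(delta, y, mu + sigma * norm.pdf(z) / norm.sf(z))
    beta_new = np.linalg.lstsq(X, y_fill, rcond=None)[0]
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        break
    beta = beta_new

print(beta)
```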

9.
A gamma regression model with an exponential link function for the means is considered. Moment properties of the deviance statistics based on maximum likelihood and weighted least squares fits are used to define modified deviance statistics which provide alternative global goodness-of-fit tests. The null distribution properties of the deviances and modified deviances are compared with those of the approximating chi-square distribution, and it is shown that the use of the modified deviances gives much better control over the significance levels of the tests.
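For orientation, a deviance from a gamma fit of this type can be obtained with statsmodels; the abstract's exponential link for the means (mean = exp of the linear predictor) corresponds, in GLM software terms, to a log link. The simulated design and shape parameter are illustrative, and the paper's modified deviance statistics are not reproduced here.

```python
# Gamma regression with mean mu_i = exp(x_i' beta); print the raw deviance that
# the paper's modified statistics would adjust before a chi-square comparison.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 150
x = rng.uniform(0, 2, n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.8 * x)
shape = 3.0
y = rng.gamma(shape, scale=mu / shape)    # gamma responses with mean mu

fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print("deviance:", fit.deviance, "residual df:", fit.df_resid)
```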

10.
Quantile regression (QR) allows one to model the effect of covariates across the entire response distribution, rather than only at the mean, but QR methods have been applied almost exclusively to continuous response variables and without considering spatial effects. Of the few studies that have performed QR on count data, none have included random spatial effects, which are an integral facet of the Bayesian spatial QR model for areal counts that we propose. Additionally, we introduce a simplifying alternative to the response variable transformation currently employed in the QR-for-counts literature. The efficacy of the proposed model is demonstrated via a simulation study and a real data application from the Texas Department of Family and Protective Services (TDFPS). Our model outperforms a comparable non-spatial model in both instances, as evidenced by the deviance information criterion (DIC) and coverage probabilities. With the TDFPS data, we identify one of four covariates, along with the intercept, as having a nonconstant effect across the response distribution.

11.
Hierarchical study designs often occur in areas such as epidemiology, psychology, sociology, public health, engineering, and agriculture. They impose a correlation structure on the data that needs to be accounted for in the modelling process. In this study, a three-level mixed-effects least squares support vector regression (MLS-SVR) model is proposed, extending the standard least squares support vector regression (LS-SVR) model to handle cluster-correlated data. The MLS-SVR model incorporates multiple random effects, which allows it to handle an unequal number of observations per case at non-fixed time points (a very unbalanced situation) and correlation between subjects simultaneously. The methodology consists of a regression modelling step that is performed straightforwardly by solving a linear system. The proposed model is illustrated through numerical studies on simulated data sets and a real data example on human brucellosis frequency. The generalization performance of the proposed MLS-SVR is evaluated by comparison with ordinary LS-SVR and some other parametric models.
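A sketch of the standard (single-level) LS-SVR baseline, whose fit likewise reduces to a single linear system in the bias and dual coefficients; the RBF kernel, regularization constant, and toy data are illustrative, and the multi-level random-effects extension proposed in the paper is not reproduced.

```python
# Ordinary LS-SVR: solve [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y],
# then predict with f(x) = sum_i alpha_i K(x, x_i) + b.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 60)
y = np.sinc(x) + rng.normal(scale=0.05, size=x.size)

gamma_reg, bandwidth = 10.0, 0.5

def rbf_kernel(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))

K = rbf_kernel(x, x)
n = x.size
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma_reg
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)       # one linear solve gives the whole fit
b, alpha = sol[0], sol[1:]

x_new = np.linspace(-3, 3, 5)
pred = rbf_kernel(x_new, x) @ alpha + b    # predictions at new inputs
print(pred)
```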

12.
We study nonlinear least-squares problems that can be transformed to linear problems by a change of variables. We derive a general formula for the statistically optimal weights and prove that the resulting linear regression gives an optimal estimate (one satisfying an analogue of the Cramér-Rao lower bound) in the limit of small noise.
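One classic instance of such a transformation is the exponential model y = a·exp(bx): taking logs linearizes it, and a delta-method argument gives Var(log y_i) ≈ σ²/y_i², so weights proportional to y_i² are the small-noise optimal choice for the linearized regression. The sketch below compares the unweighted and weighted linearized fits; the paper's general weight formula is not reproduced.

```python
# Weighted linearized least squares for y = a*exp(b*x): after taking logs, the
# small-noise optimal weights are w_i = y_i^2 (inverse of the approximate
# variance of log y_i).
import numpy as np

rng = np.random.default_rng(9)
a_true, b_true, sigma = 2.0, 0.7, 0.05
x = np.linspace(0, 3, 80)
y = a_true * np.exp(b_true * x) + rng.normal(scale=sigma, size=x.size)

X = np.column_stack([np.ones_like(x), x])
z = np.log(y)

beta_unweighted = np.linalg.lstsq(X, z, rcond=None)[0]
W = np.diag(y ** 2)
beta_weighted = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)

print("unweighted fit: a =", np.exp(beta_unweighted[0]), " b =", beta_unweighted[1])
print("weighted fit:   a =", np.exp(beta_weighted[0]), " b =", beta_weighted[1])
```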

13.
Summary. We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model that also takes into account the variability of the estimated correction terms. The differing scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study of caries experience in 7-year-old Flemish children (Belgium). Since the measurement error is on the response, the factor 'examiner' could be included in the regression model to correct for its confounding effect. However, controlling for examiner largely removed the geographical east–west trend. Instead, we suggest a (Bayesian) ordinal logistic model which corrects for the scoring error (relative to a gold standard) using a calibration data set. The marginal posterior distribution of the regression parameters of interest is obtained by integrating out the correction terms pertaining to the calibration data set. This is done by processing two Markov chains sequentially: one Markov chain samples the correction terms, and the sampled correction terms are imputed into the Markov chain pertaining to the regression parameters. The model was fitted to the oral health data of the Signal–Tandmobiel® study. A WinBUGS program was written to perform the analysis.
