81.
The random coefficient model (RCM) is a powerful statistical tool for analyzing correlated data collected from studies with different clusters or from longitudinal studies. In practice, biomedical researchers need statistical methods that adjust for measured and unmeasured covariates that might affect the regression model. This article studies two nonparametric methods for handling auxiliary covariate data in linear random coefficient models. We demonstrate how to estimate the coefficients of the models and how to predict the random effects when the covariates are missing or mismeasured. We employ an empirical estimator and a kernel smoother to handle a discrete and a continuous auxiliary covariate, respectively. Simulation results show that the proposed methods perform better than an alternative method that uses only the data in the validation set and ignores the random effects in the random coefficient model.
82.
83.
Li Yin, Communications in Statistics - Theory and Methods, 2013, 42(5): 1080-1095
When estimating a treatment effect on a count outcome in a given population, different studies use different models, resulting in non-comparable measures of treatment effect. Here we show that the marginal rate differences in these studies are comparable measures of treatment effect. We estimate the marginal rate differences with log-linear models and show that their finite-sample maximum-likelihood estimates are unbiased and highly robust to the effects of dispersing covariates on the outcome. We obtain approximate finite-sample distributions of these estimates from the asymptotic normal distribution of the estimates of the log-linear model parameters. The method is easy to apply in practice.
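The marginal rate difference described above can be illustrated with a standardization (G-computation) sketch: fit a log-linear Poisson model, then average the model-predicted rates with treatment set to 1 versus 0 for everyone. This is a generic illustration under assumed data, not the authors' estimator; the Newton-Raphson fitter and all variable names are hypothetical.

```python
import numpy as np

def poisson_fit(X, y, iters=50):
    """Fit a log-linear (Poisson) model by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                 # fitted rates
        grad = X.T @ (y - mu)                 # score
        hess = X.T @ (X * mu[:, None])        # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

def marginal_rate_difference(treat, covar, y):
    """Marginal rate difference by standardization: average the
    predicted rates with treatment set to 1 vs 0 for everyone."""
    X = np.column_stack([np.ones_like(y, dtype=float), treat, covar])
    beta = poisson_fit(X, y)
    X1 = X.copy(); X1[:, 1] = 1.0
    X0 = X.copy(); X0[:, 1] = 0.0
    return np.mean(np.exp(X1 @ beta)) - np.mean(np.exp(X0 @ beta))

# hypothetical simulated data: binary treatment a, covariate z
rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
y = rng.poisson(np.exp(0.2 + 0.5 * a + 0.3 * z))
mrd = marginal_rate_difference(a, z, y)
```

For this data-generating model the true marginal rate difference is roughly 0.8, and the estimate lands nearby for moderate sample sizes.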
84.
In this paper, we examine the performance of Anderson's classification statistic with covariate adjustment against the usual Anderson's classification statistic without covariate adjustment in a two-population normal covariate classification problem. The same problem has been investigated with different methods of comparison by several authors; see the bibliography. The aim of this paper is to give a direct comparison based on the asymptotic probabilities of misclassification. We show that for large, equal training-sample sizes from each population, Anderson's classification statistic with covariate adjustment and cut-off point equal to zero performs better.
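The unadjusted Anderson classification statistic referenced above can be sketched in its textbook form: classify a new observation to population 1 when W is at least the cutoff (zero for equal priors). The group means and common covariance below are hypothetical numbers, not from the paper.

```python
import numpy as np

def anderson_W(x, xbar1, xbar2, S_inv):
    """Anderson's linear classification statistic:
    W = (x - (xbar1 + xbar2)/2)' S^{-1} (xbar1 - xbar2).
    Assign x to population 1 if W >= cutoff (cutoff 0 for equal priors)."""
    return (x - 0.5 * (xbar1 + xbar2)) @ S_inv @ (xbar1 - xbar2)

# hypothetical two-group setup with common covariance = identity
xbar1 = np.array([1.0, 0.0])
xbar2 = np.array([-1.0, 0.0])
S_inv = np.eye(2)
w = anderson_W(np.array([0.9, 0.2]), xbar1, xbar2, S_inv)   # near group 1
```

A point near the first group's mean yields a positive W, so it is assigned to population 1.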
85.
In the presence of covariate information, and assuming a linear relationship between a transformation of the survival time and the covariates, we propose a new estimator of the survival function and show its consistency. We compare the proposed estimator with the product-limit estimator of Kaplan and Meier (1958) through Monte Carlo simulation studies, and illustrate it with the updated Stanford heart transplant data.
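The Kaplan-Meier product-limit estimator used as the benchmark above can be sketched in a few lines; this is the standard estimator, not the authors' transformation-based proposal, and the small data set is hypothetical.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of S(t) at each distinct event time.
    times: observed times; events: 1 = event occurred, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    s, surv = 1.0, {}
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                    # still under observation
        deaths = np.sum((times == t) & (events == 1))   # events at time t
        s *= 1.0 - deaths / at_risk                     # product-limit update
        surv[t] = s
    return surv

# hypothetical sample: events at t=1,2,3; censoring at t=2,4
km = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
# km == {1.0: 0.8, 2.0: 0.6, 3.0: 0.3}
```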
86.
Ganesh Dutta, Communications in Statistics - Theory and Methods, 2017, 46(19): 9703-9725
This paper continues Part I of our effort to provide illustrative examples of analysis of covariance (ANCOVA) models and related data analyses. We discuss four more examples, all drawn from standard textbooks, and revisit them with a view to suggesting optimal or nearly optimal designs for estimating the covariate parameter(s).
87.
Zhiping Qiu, Communications in Statistics - Theory and Methods, 2017, 46(23): 11575-11590
Missing covariate data are common in biomedical studies. In this article, using the nonparametric kernel regression technique, a new imputation approach is developed for the Cox proportional hazards regression model with missing covariates. The method achieves the same efficiency as the fully augmented weighted estimators (Qi et al. 2005, Journal of the American Statistical Association, 100:1250) and has a simpler form. The asymptotic properties of the proposed estimator are derived and analyzed. The proposed imputation method is compared with several existing methods through simulation studies and a mouse leukemia data set.
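The kernel-regression idea behind such imputation can be illustrated with a Nadaraya-Watson sketch that predicts a missing covariate X from an always-observed surrogate W using validation data. This is only the smoothing ingredient, not the authors' full estimator for the Cox model; the bandwidth and simulated relationship are hypothetical.

```python
import numpy as np

def nw_impute(w, x_obs, w_obs, h=0.3):
    """Nadaraya-Watson kernel estimate of E[X | W = w] from
    validation data (w_obs, x_obs), with a Gaussian kernel."""
    k = np.exp(-0.5 * ((w - w_obs) / h) ** 2)   # kernel weights
    return np.sum(k * x_obs) / np.sum(k)        # weighted average

# hypothetical validation sample where X ~ 2W + small noise
rng = np.random.default_rng(3)
w_obs = rng.uniform(0, 1, 500)
x_obs = 2.0 * w_obs + rng.normal(0, 0.1, 500)
x_hat = nw_impute(0.5, x_obs, w_obs, h=0.1)     # should be near 2 * 0.5 = 1
```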
88.
The Box–Cox power transformation is a commonly used method for transforming data toward a normal distribution. It relies on a single transformation parameter, and this study focuses on estimating that parameter. For this purpose, we employ seven popular goodness-of-fit tests for normality, namely the Shapiro–Wilk, Anderson–Darling, Cramér–von Mises, Pearson chi-square, Shapiro–Francia, Lilliefors, and Jarque–Bera tests, together with a search algorithm. The algorithm finds the argument of the minimum or maximum of the test statistic, depending on the test: the maximum for Shapiro–Wilk and Shapiro–Francia, the minimum for the rest. The artificial covariate method of Dag et al. (2014) is included for comparison. Simulation studies compare the performance of the methods: Shapiro–Wilk and the artificial covariate method are the most effective, while Pearson chi-square performs worst. The methods are also applied to two real-life datasets. The R package AID is proposed for implementation of the aforementioned methods.
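The Shapiro–Wilk variant of the search described above can be sketched as a simple grid search: transform the data for each candidate lambda and keep the lambda whose transformed sample maximizes the W statistic. This is a coarse stand-in for the paper's algorithm and the AID package, with a hypothetical grid and simulated data.

```python
import numpy as np
from scipy import stats

def boxcox(y, lam):
    """Box-Cox power transformation; y must be positive."""
    return np.log(y) if lam == 0 else (y**lam - 1) / lam

def estimate_lambda(y, grid=np.linspace(-2, 2, 401)):
    """Pick the lambda whose transformed data maximizes the
    Shapiro-Wilk W statistic (i.e., looks most normal)."""
    best_lam, best_w = None, -np.inf
    for lam in grid:
        w, _ = stats.shapiro(boxcox(y, lam))
        if w > best_w:
            best_lam, best_w = lam, w
    return best_lam

# log-normal data, so the true transformation parameter is about 0
rng = np.random.default_rng(0)
y = rng.lognormal(0.0, 0.5, size=200)
lam_hat = estimate_lambda(y)
```

For tests whose statistic is small under normality (e.g. Anderson–Darling), the same loop would keep the argmin instead of the argmax.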
89.
Tao Lu, Journal of Applied Statistics, 2017, 44(13): 2354-2367
Joint modeling of longitudinal and survival data has been an active research area. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this study, we develop joint models that concurrently account for longitudinal and survival data with multiple features. Specifically, the proposed model handles skewness, missingness, and measurement errors in covariates, all of which are typical in longitudinal survival data from many studies. We employ a Bayesian inferential method for the proposed model, apply it to a real data study, compare a few alternative models under different conditions, and conduct extensive simulations to evaluate how the method performs.
90.
Szu-Peng Yang, Communications in Statistics - Simulation and Computation, 2017, 46(8): 6083-6105
This paper adopts a Bayesian strategy for generalized ridge estimation in high-dimensional regression. We also consider significance testing based on the proposed estimator, which is useful for selecting regressors. Both theoretical and simulation studies show that the proposed estimator can simultaneously outperform the ordinary ridge estimator and the least-squares estimator (LSE) under the mean squared error (MSE) criterion. The simulation study also demonstrates the competitive MSE performance of our proposal relative to the Lasso under sparse models. We illustrate the method with lung cancer data involving high-dimensional microarrays.
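The two baselines named above, the ordinary ridge estimator and the LSE, can be contrasted in a small simulation sketch under a sparse model; the Bayesian generalized ridge procedure itself is not reproduced here, and the penalty value, dimensions, and seed are all hypothetical.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam I)^{-1} X'y; lam = 0 gives the LSE."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# sparse truth, p close to n, so the LSE is very noisy
rng = np.random.default_rng(2)
n, p = 50, 40
beta = np.zeros(p)
beta[:5] = 1.0
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

mse_lse   = np.mean((ridge(X, y, 0.0) - beta) ** 2)   # least squares
mse_ridge = np.mean((ridge(X, y, 5.0) - beta) ** 2)   # shrunken estimate
```

With p close to n, the shrinkage bias of ridge is typically outweighed by its variance reduction, so its coefficient MSE comes in below the LSE's.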