Similar Literature
20 similar documents retrieved (search time: 640 ms)
1.
Propensity score methods are increasingly used in medical literature to estimate treatment effect using data from observational studies. Despite many papers on propensity score analysis, few have focused on the analysis of survival data. Even within the framework of the popular proportional hazard model, the choice among marginal, stratified or adjusted models remains unclear. A Monte Carlo simulation study was used to compare the performance of several survival models to estimate both marginal and conditional treatment effects. The impact of accounting or not for pairing when analysing propensity-score-matched survival data was assessed. In addition, the influence of unmeasured confounders was investigated. After matching on the propensity score, both marginal and conditional treatment effects could be reliably estimated. Ignoring the paired structure of the data led to an increased test size due to an overestimated variance of the treatment effect. Among the various survival models considered, stratified models systematically showed poorer performance. Omitting a covariate in the propensity score model led to a biased estimation of treatment effect, but replacement of the unmeasured confounder by a correlated one allowed a marked decrease in this bias. Our study showed that propensity scores applied to survival data can lead to unbiased estimation of both marginal and conditional treatment effect, when marginal and adjusted Cox models are used. In all cases, it is necessary to account for pairing when analysing propensity-score-matched data, using a robust estimator of the variance. Copyright © 2012 John Wiley & Sons, Ltd.
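The practical recipe in this abstract (match on the propensity score, then fit a marginal Cox model with a variance estimator that respects the pairing) can be sketched as follows. This is a minimal illustration under invented data and column names, not the authors' simulation code; it assumes scikit-learn and lifelines, and matching is done with replacement for simplicity.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                                      # measured confounders
p = 1 / (1 + np.exp(-(x @ [0.5, -0.4, 0.3])))                    # true treatment model
a = rng.binomial(1, p)                                           # treatment indicator
t = rng.exponential(1 / np.exp(0.7 * a + x @ [0.3, 0.3, 0.3]))   # event times
c = rng.exponential(2.0, n)                                      # censoring times
df = pd.DataFrame(x, columns=["x1", "x2", "x3"])
df["a"], df["time"], df["event"] = a, np.minimum(t, c), (t <= c).astype(int)

# Step 1: estimate the propensity score from the measured confounders.
ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]

# Step 2: 1:1 nearest-neighbour matching of treated to controls on the PS
# (with replacement, so a control can serve in more than one pair).
treated = np.where(a == 1)[0]
controls = np.where(a == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# Step 3: marginal Cox model on the matched sample; cluster_col requests a
# robust (sandwich) variance clustered on the matched pairs.
m = df.iloc[np.concatenate([treated, matched_controls])].copy()
m["pair_id"] = np.concatenate([np.arange(len(treated))] * 2)
cph = CoxPHFitter()
cph.fit(m[["time", "event", "a", "pair_id"]], duration_col="time",
        event_col="event", cluster_col="pair_id")
print(cph.summary[["coef", "se(coef)"]])
```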

2.
In observational studies treatment may be adapted to the patient's state during the course of time. These covariates may in turn also respond to the treatment under study, and so on. This makes it hard to distinguish between treatment effect and selection bias. Structural nested models aim at estimating treatment effect in such complicated situations, even when treatment may change at any time. We show that structural nested models can often be calculated with standard software, by using standard models to predict treatment as a tool to estimate treatment effect. Robins (Survival Analysis, Volume 6 of Encyclopedia of Biostatistics, John Wiley and Sons, Chichester, 1998) conjectured this, but so far it was unproven. We use a partial likelihood approach to choose the estimators and tests as a subclass of the estimators and tests in Lok (math.ST/0410271 at http://arXiv.org, 2004). We show that this is the class of estimators and tests that can be calculated with standard software. The estimators are consistent and asymptotically normal, and have interesting asymptotic properties.
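The idea that standard models predicting treatment can serve as a tool for estimating treatment effect is the essence of g-estimation. Below is a minimal single-time-point sketch (the paper treats time-varying treatment, which is substantially harder) with invented variable names, assuming statsmodels: under a rank-preserving additive effect, we search for the effect psi at which the "treatment-free" outcome H(psi) = Y - psi*A no longer predicts treatment given covariates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
L = rng.normal(size=n)                       # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L)))    # treatment depends on L
Y = 2.0 * A + L + rng.normal(size=n)         # true effect psi = 2

def alpha_H(psi):
    """Coefficient of H(psi) = Y - psi*A in a logistic model for A given L.
    At the true psi, H(psi) carries no information about A beyond L."""
    H = Y - psi * A
    X = sm.add_constant(np.column_stack([L, H]))
    fit = sm.Logit(A, X).fit(disp=0)
    return fit.params[2]

# Grid search for the psi that zeroes the coefficient of H(psi).
grid = np.linspace(0, 4, 401)
psi_hat = grid[np.argmin([abs(alpha_H(p)) for p in grid])]
print(f"g-estimate of treatment effect: {psi_hat:.2f}")  # approx 2.0
```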

3.
In randomized clinical trials (RCTs), we may come across the situation in which some patients do not fully comply with their assigned treatment. For an experimental treatment with trichotomous levels, we derive the maximum likelihood estimator (MLE) of the risk ratio (RR) per level of dose increase in an RCT with noncompliance. We further develop three asymptotic interval estimators for the RR. To evaluate and compare the finite sample performance of these interval estimators, we employ Monte Carlo simulation. When the number of patients per treatment is large, we find that all interval estimators derived in this paper can perform well. When the number of patients is not large, we find that the interval estimator using Wald's statistic can be liberal, while the interval estimator using the logarithmic transformation of the MLE can lose precision. We note that use of a bootstrap variance estimate in this case may alleviate these concerns. We further note that an interval estimator combining the interval estimators based on Wald's statistic and on the logarithmic transformation can generally perform well with respect to the coverage probability, and be generally more efficient than interval estimators using bootstrap variance estimates when RR > 1. Finally, we use data taken from a study of vitamin A supplementation to reduce mortality in preschool children to illustrate the use of these estimators.
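The two basic intervals contrasted above, Wald on the original RR scale versus on the log scale, can be sketched for a simple two-arm case (the paper's trichotomous-dose MLE is more involved). All counts are hypothetical; this is not the paper's estimator.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical event counts / denominators in two dose groups.
x1, n1 = 30, 200    # events, patients at the higher dose
x0, n0 = 15, 200    # events, patients at the lower dose
p1, p0 = x1 / n1, x0 / n0
rr = p1 / p0
z = norm.ppf(0.975)

# Wald interval on the RR scale (can be liberal in small samples).
se_log = np.sqrt((1 - p1) / x1 + (1 - p0) / x0)   # SE of log(RR)
se_rr = rr * se_log                                # delta-method SE of RR
wald = (rr - z * se_rr, rr + z * se_rr)

# Interval from the log transformation (can lose precision with small counts).
log_ci = (rr * np.exp(-z * se_log), rr * np.exp(z * se_log))

print(f"RR = {rr:.2f}, Wald CI = ({wald[0]:.2f}, {wald[1]:.2f}), "
      f"log CI = ({log_ci[0]:.2f}, {log_ci[1]:.2f})")
```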

4.
Some multicenter randomized controlled trials (e.g. for rare diseases or with slow recruitment) involve many centers with few patients in each. Under within-center randomization, some centers might not assign each treatment to at least one patient; hence, such centers have no within-center treatment effect estimates and the center-stratified treatment effect estimate can be inefficient, perhaps to an extent with statistical and ethical implications. Recently, combining complete and incomplete centers with a priori weights has been suggested. However, a concern is whether using the incomplete centers increases bias. To study this concern, an approach with randomization models for a finite population was used to evaluate bias of the usual complete center estimator, the simple center-ignoring estimator, and the weighted estimator combining complete and incomplete centers. The situation with two treatments and many centers, each with either one or two patients, was evaluated. Various patient accrual mechanisms were considered, including one involving selection bias. The usual complete center estimator and the weighted estimator were unbiased under the overall null hypothesis, even with selection bias. An actual dermatology clinical trial motivates and illustrates these methods.

5.
In non-randomized biomedical studies using the proportional hazards model, the data often constitute an unrepresentative sample of the underlying target population, which results in biased regression coefficients. The bias can be avoided by weighting included subjects by the inverse of their respective selection probabilities, as proposed by Horvitz & Thompson (1952) and extended to the proportional hazards setting for use in surveys by Binder (1992) and Lin (2000). In practice, the weights are often estimated and must be treated as such in order for the resulting inference to be accurate. The authors propose a two-stage weighted proportional hazards model in which, at the first stage, weights are estimated through a logistic regression model fitted to a representative sample from the target population. At the second stage, a weighted Cox model is fitted to the biased sample. The authors propose estimators for the regression parameter and cumulative baseline hazard. They derive the asymptotic properties of the parameter estimators, accounting for the difference in the variance introduced by the randomness of the weights. They evaluate the accuracy of the asymptotic approximations in finite samples through simulation. They illustrate their approach in an analysis of renal transplant patients using data obtained from the Scientific Registry of Transplant Recipients.
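A minimal sketch of the two-stage procedure with simulated data and invented column names: a logistic regression estimates selection probabilities in stage one, and a Cox model weighted by their inverses is fitted in stage two. It assumes lifelines' weights_col and robust options; note the sandwich variance below does not account for the estimated weights, which is precisely what the paper's asymptotics address.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
N = 5000
z = rng.normal(size=N)                         # covariate in the target population
t = rng.exponential(1 / np.exp(0.5 * z))       # event times
c = rng.exponential(2.0, N)
pop = pd.DataFrame({"z": z, "time": np.minimum(t, c),
                    "event": (t <= c).astype(int)})

# Biased inclusion: selection probability depends on z.
p_sel = 1 / (1 + np.exp(-(0.5 + z)))
sel = rng.binomial(1, p_sel).astype(bool)

# Stage 1: estimate selection probabilities via logistic regression.
sel_model = LogisticRegression().fit(pop[["z"]], sel)
w = 1 / sel_model.predict_proba(pop.loc[sel, ["z"]])[:, 1]

# Stage 2: inverse-probability-weighted Cox model on the biased sample.
biased = pop.loc[sel].copy()
biased["w"] = w
cph = CoxPHFitter()
cph.fit(biased, duration_col="time", event_col="event",
        weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```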

6.
We explore a class of vector smoothers based on local polynomial regression for fitting nonparametric regression models which have a vector response. The asymptotic bias and variance for the class of estimators are derived for two different ways of representing the variance matrices within both a seemingly unrelated regression and a vector measurement error framework. We show that the asymptotic behaviour of the estimators is different in these four cases. In addition, the placement of the kernel weights in weighted least squares estimators is very important in the seemingly unrelated regressions problem (to ensure that the estimator is asymptotically unbiased) but not in the vector measurement error model. It is shown that the component estimators are asymptotically uncorrelated in the seemingly unrelated regressions model but asymptotically correlated in the vector measurement error model. These new and interesting results extend our understanding of the problem of smoothing dependent data.

7.
Improving Ratio Estimators of Second Order Point Process Characteristics
Ripley's K function, the L function and the pair correlation function are important second order characteristics of spatial point processes. These functions are usually estimated by ratio estimators, where the numerators are Horvitz–Thompson edge corrected estimators and the denominators estimate the intensity or its square. It is possible to improve these estimators with respect to bias and estimation variance by means of adapted distance dependent intensity estimators. Further improvement is possible by means of refined estimators of the square of intensity. All this is shown by statistical analysis of simulated Poisson, cluster and hard core processes.
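For orientation, the basic ratio estimator that the paper improves upon counts pairs within distance r and divides by an estimate of the squared intensity, n(n-1)/|W|. A minimal sketch below, with the edge-correction factor omitted (i.e., set to 1) and a hypothetical unit-square window:

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, r, area):
    """Ratio estimator of Ripley's K without edge correction:
    K_hat(r) = area / (n * (n - 1)) * sum over ordered pairs of 1(d_ij <= r).
    The denominator n * (n - 1) / area estimates the squared intensity."""
    n = len(points)
    d = pdist(points)                         # unordered pairwise distances
    counts = np.array([(d <= ri).sum() * 2 for ri in np.atleast_1d(r)])
    return area * counts / (n * (n - 1))

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, size=(500, 2))        # Poisson-like pattern on the unit square
print(ripley_k(pts, np.array([0.05, 0.10]), area=1.0))  # for CSR, roughly pi * r**2
```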

8.
We estimate cause–effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second-stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance.

9.
Most of the long memory estimators for stationary fractionally integrated time series models are known to experience non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a lag-one sample autocorrelation-based moment estimator. In order to reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of lag-one sample autocorrelation up to order n^{-1}. An example where the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the northern hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
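The moment estimator in question exploits the fact that, for an ARFIMA(0, d, 0) process, the lag-one autocorrelation is rho(1) = d / (1 - d), so d can be recovered by inverting this relation. A minimal sketch of the uncorrected estimator follows (the paper's exact order-n^{-1} bias correction is not reproduced here); the test series is generated from a truncated MA(infinity) approximation, which is an approximation of my own choosing.

```python
import numpy as np

def d_moment(x):
    """Moment estimator of the long-memory parameter d from the lag-one
    sample autocorrelation, using rho(1) = d / (1 - d) for ARFIMA(0,d,0)."""
    x = np.asarray(x) - np.mean(x)
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)   # lag-one sample autocorrelation
    return r1 / (1 + r1)

# Approximate ARFIMA(0, d, 0) sample via truncated MA(infinity) weights
# psi_j = psi_{j-1} * (j - 1 + d) / j, psi_0 = 1.
rng = np.random.default_rng(4)
d, n, trunc = 0.3, 5000, 1000
k = np.arange(1, trunc)
psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])
x = np.convolve(rng.normal(size=n + trunc), psi, mode="valid")[:n]
print(f"d_hat = {d_moment(x):.3f} (true d = {d})")   # biased in finite samples
```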

10.
When multilevel models are estimated from survey data derived using multistage sampling, unequal selection probabilities at any stage of sampling may induce bias in standard estimators, unless the sources of the unequal probabilities are fully controlled for in the covariates. This paper proposes alternative ways of weighting the estimation of a two-level model by using the reciprocals of the selection probabilities at each stage of sampling. Consistent estimators are obtained when both the sample number of level 2 units and the sample number of level 1 units within sampled level 2 units increase. Scaling of the weights is proposed to improve the properties of the estimators and to simplify computation. Variance estimators are also proposed. In a limited simulation study the scaled weighted estimators are found to perform well, although non-negligible bias starts to arise for informative designs when the sample number of level 1 units becomes small. The variance estimators perform extremely well. The procedures are illustrated using data from the survey of psychiatric morbidity.
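One widely used scaling (shown below as a sketch; whether it matches this paper's specific proposal is not asserted) rescales the level-1 weights within each level-2 unit so that they sum to the cluster's sample size, keeping the relative weights intact:

```python
import pandas as pd

# Hypothetical two-level survey data: w1 are within-cluster inverse selection
# probabilities (level 1), w2 are cluster-level weights (level 2).
df = pd.DataFrame({
    "cluster": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "w1":      [2.0, 2.0, 4.0, 1.5, 3.0, 2.0, 2.0, 2.0, 6.0],
    "w2":      [1.2, 1.2, 1.2, 2.0, 2.0, 1.5, 1.5, 1.5, 1.5],
})

# Scale level-1 weights so they sum to the cluster sample size n_j:
# w1*_ij = w1_ij * n_j / sum_i(w1_ij).
grp = df.groupby("cluster")["w1"]
df["w1_scaled"] = df["w1"] * grp.transform("size") / grp.transform("sum")
print(df)
```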

11.
Inverse probability weighting (IPW) can deal with confounding in non-randomized studies. The inverse weights are probabilities of treatment assignment (propensity scores), estimated by regressing assignment on predictors. Problems arise if predictors can be missing. Solutions previously proposed include assuming assignment depends only on observed predictors, and multiple imputation (MI) of missing predictors. For the MI approach, it was recommended that missingness indicators be used with the other predictors. We determine when the two MI approaches (with/without missingness indicators) yield consistent estimators and compare their efficiencies. We find that, although including indicators can reduce bias when predictors are missing not at random, it can induce bias when they are missing at random. We propose a consistent variance estimator and investigate the performance of the simpler Rubin's Rules variance estimator. In simulations we find both estimators perform well. IPW is also used to correct bias when an analysis model is fitted to incomplete data by restricting to complete cases. Here, weights are inverse probabilities of being a complete case. We explain how the same MI methods can be used in this situation to deal with missing predictors in the weight model, and illustrate this approach using data from the National Child Development Survey.
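Rubin's Rules, referred to above, combine a point estimate and its variance across M imputed datasets. A minimal sketch with invented inputs:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine per-imputation point estimates and variances.
    Total variance = mean within-imputation variance
                   + (1 + 1/M) * between-imputation variance."""
    q = np.asarray(estimates)
    u = np.asarray(variances)
    M = len(q)
    q_bar = q.mean()                 # pooled point estimate
    w_bar = u.mean()                 # average within-imputation variance
    b = q.var(ddof=1)                # between-imputation variance
    return q_bar, w_bar + (1 + 1 / M) * b

# Hypothetical IPW treatment-effect estimates from M = 5 imputations.
est = [0.42, 0.45, 0.39, 0.44, 0.41]
var = [0.010, 0.011, 0.009, 0.012, 0.010]
q_bar, t = rubins_rules(est, var)
print(f"pooled estimate {q_bar:.3f}, SE {np.sqrt(t):.3f}")
```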

12.
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended.

13.
It has been demonstrated in the literature that local polynomial models may be used to estimate the size of an open population using capture–recapture data. However, very little is known about their properties. Here we develop a setting in which the properties of nonparametric estimators of the size of an open population using capture–recapture data can be examined and establish conditions under which expressions for the bias and variance may be determined.

14.
In this paper, a new small domain estimator for area-level data is proposed. The proposed estimator is driven by a real problem of estimating the mean price of housing transactions at a regional level in a European country, using data collected from a longitudinal survey conducted by a national statistical office. At the desired level of inference, it is not possible to provide accurate direct estimates because the sample sizes in these domains are very small. An area-level model with a heterogeneous covariance structure of random effects underlies the proposed combined estimator. This model is an extension of a model due to Fay and Herriot [5], but it integrates information across domains and over several periods of time. In addition, a modified method of estimation of variance components for time-series and cross-sectional area-level models is proposed by including the design weights. A Monte Carlo simulation, based on real data, is conducted to investigate the performance of the proposed estimators in comparison with other estimators frequently used in small area estimation problems. In particular, we compare the performance of these estimators with the estimator based on the Rao–Yu model [23]. The simulation study also assesses the performance of the modified variance component estimators in comparison with the traditional ANOVA method. Simulation results show that the estimators proposed perform better than the other estimators in terms of both precision and bias.
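For context, the basic Fay–Herriot composite that this paper extends shrinks each direct estimate toward a regression synthetic estimate with weight gamma_i = sigma_v^2 / (sigma_v^2 + D_i). The sketch below covers only this basic model, not the paper's heterogeneous time-series extension, and all data are simulated:

```python
import numpy as np

def fay_herriot_eblup(y, X, D, sigma2_v):
    """Basic Fay-Herriot EBLUP: theta_i = gamma_i * y_i + (1 - gamma_i) * x_i' beta,
    where y are direct estimates, D their known sampling variances, and
    sigma2_v the variance of the area-level random effects."""
    V_inv = 1.0 / (sigma2_v + D)                         # diagonal of V^{-1}
    beta = np.linalg.solve(X.T @ (V_inv[:, None] * X),   # GLS regression coefficients
                           X.T @ (V_inv * y))
    gamma = sigma2_v / (sigma2_v + D)                    # shrinkage weights
    return gamma * y + (1 - gamma) * (X @ beta)

rng = np.random.default_rng(5)
m = 20                                                   # number of small areas
X = np.column_stack([np.ones(m), rng.normal(size=m)])
theta = X @ [10.0, 2.0] + rng.normal(0, 1.0, m)          # true area means (sigma_v = 1)
D = rng.uniform(0.5, 3.0, m)                             # known sampling variances
y = theta + rng.normal(0, np.sqrt(D))                    # direct estimates
print(fay_herriot_eblup(y, X, D, sigma2_v=1.0)[:5])
```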

15.
The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance, thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, and so, in turn, does the sample size resulting from the related re-estimation procedure. This higher variability can lead to a lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
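A minimal sketch of the comparison: the blinded one-sample variance of the pooled pilot data versus the unblinded pooled within-group variance, each plugged into the usual two-sample normal-approximation sample-size formula. All settings are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
delta, alpha, power = 1.0, 0.05, 0.80
n1 = 40                                      # internal pilot size per group
grp0 = rng.normal(0.0, 2.0, n1)              # control
grp1 = rng.normal(delta, 2.0, n1)            # treatment

# Blinded one-sample variance: pool the data, ignore the group labels
# (inflated by roughly delta**2 / 4 under 1:1 allocation).
s2_blinded = np.concatenate([grp0, grp1]).var(ddof=1)

# Unblinded pooled within-group variance, for reference.
s2_unblinded = (grp0.var(ddof=1) + grp1.var(ddof=1)) / 2

def n_per_group(s2):
    """Two-sample normal-approximation sample size per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * s2 * z**2 / delta**2))

print("blinded  :", n_per_group(s2_blinded))
print("unblinded:", n_per_group(s2_unblinded))
```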

16.
Single-index models provide one way of reducing the dimension in regression analysis. The statistical literature has focused mainly on estimating the index coefficients, the mean function, and their asymptotic properties. For accurate statistical inference it is equally important to estimate the error variance of these models. We examine two estimators of the error variance in a single-index model and compare them with a few competing estimators with respect to their corresponding asymptotic properties. Using a simulation study, we evaluate the finite-sample performance of our estimators against their competitors.

17.
Empirical Bayes (EB) estimates in general linear mixed models are useful for small area estimation in the sense of increasing the precision of estimation of small area means. However, one potential difficulty of EB is that the overall estimate for a larger geographical area based on a (weighted) sum of EB estimates is not necessarily identical to the corresponding direct estimate such as the overall sample mean. Another difficulty is that EB estimates yield over-shrinking, which results in a sampling variance smaller than the posterior variance. One way to fix these problems is the benchmarking approach based on constrained empirical Bayes (CEB) estimators, which satisfy the constraints that the aggregated mean and variance are identical to the requested values of mean and variance. In this paper, we treat general mixed models, derive asymptotic approximations of the mean squared error (MSE) of CEB and provide second-order unbiased estimators of MSE based on the parametric bootstrap method. These results are applied to natural exponential families with quadratic variance functions. As a specific example, the Poisson-gamma model is dealt with, and it is illustrated that the CEB estimates and their MSE estimates work well through real mortality data.
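The simplest form of the mean benchmarking constraint (forcing a weighted mean of the EB estimates to equal the direct overall estimate) can be imposed by an additive adjustment. The sketch below covers only this mean constraint with invented numbers, not the paper's joint mean-and-variance CEB:

```python
import numpy as np

def benchmark_additive(eb, w, direct_overall):
    """Shift EB small-area estimates by a common constant so that the
    weighted mean matches the direct overall estimate (weights sum to one)."""
    return eb + (direct_overall - np.dot(w, eb))

rng = np.random.default_rng(7)
m = 10
eb = rng.normal(5.0, 1.0, m)        # hypothetical EB small-area estimates
w = np.full(m, 1 / m)               # aggregation weights summing to one
direct = 5.4                        # direct estimate for the larger area
adj = benchmark_additive(eb, w, direct)
print(np.dot(w, adj))               # equals 5.4 by construction
```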

18.
We extend nonparametric regression models with local linear least squares fitting using kernel weights to the case of linear and circular predictors. We derive the asymptotic properties of the conditional bias and variance of bivariate local linear least squares kernel estimators. A small simulation study and a real experiment are given.
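For reference, the univariate local linear least squares fit with kernel weights (the baseline that the paper extends to a linear-plus-circular predictor pair) solves a small weighted regression at each evaluation point. A minimal sketch with a Gaussian kernel and simulated data:

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear kernel estimate of E[Y | X = x0]: weighted least squares
    of y on (1, x - x0) with Gaussian kernel weights of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)            # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                     # intercept = fit at x0

rng = np.random.default_rng(8)
x = rng.uniform(0, 2 * np.pi, 400)
y = np.sin(x) + rng.normal(0, 0.3, 400)
grid = np.linspace(0.5, 5.5, 6)
print([round(local_linear(x, y, g, h=0.3), 2) for g in grid])  # tracks sin(g)
```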

19.
For nonparametric regression models with fixed and random design, two classes of estimators for the error variance have been introduced: second sample moments based on residuals from a nonparametric fit, and difference-based estimators. The former are asymptotically optimal but require estimating the regression function; the latter are simple but have larger asymptotic variance. For nonparametric regression models with random covariates, we introduce a class of estimators for the error variance that are related to difference-based estimators: covariate-matched U-statistics. We give conditions on the random weights involved that lead to asymptotically optimal estimators of the error variance. Our explicit construction of the weights uses a kernel estimator for the covariate density.
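The classic first-order difference-based estimator mentioned here is simple enough to state in full; the paper's covariate-matched U-statistics generalize this idea to random covariates. A sketch of the simple version on simulated data:

```python
import numpy as np

def diff_variance(x, y):
    """Rice's first-order difference-based estimator of the error variance:
    sigma2_hat = sum (y_(i+1) - y_(i))^2 / (2 * (n - 1)),
    with y ordered by the covariate so the smooth signal largely cancels."""
    order = np.argsort(x)
    d = np.diff(np.asarray(y)[order])
    return np.sum(d ** 2) / (2 * (len(y) - 1))

rng = np.random.default_rng(9)
x = rng.uniform(0, 1, 500)
y = np.sin(4 * x) + rng.normal(0, 0.5, 500)    # true error variance 0.25
print(f"sigma2_hat = {diff_variance(x, y):.3f}")
```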

20.
Over the past decades, various principles for causal effect estimation have been proposed, all differing in terms of how they adjust for measured confounders: either via traditional regression adjustment, by adjusting for the expected exposure given those confounders (e.g., the propensity score), or by inversely weighting each subject's data by the likelihood of the observed exposure, given those confounders. When the exposure is measured with error, this raises the question of whether these different estimation strategies might be differently affected and whether one of them is to be preferred for that reason. In this article, we investigate this by comparing inverse probability of treatment weighted (IPTW) estimators and doubly robust estimators for the exposure effect in linear marginal structural mean models (MSMs) with G-estimators, propensity score (PS) adjusted estimators and ordinary least squares (OLS) estimators for the exposure effect in linear regression models. We find analytically that these estimators are equally affected when exposure misclassification is independent of the confounders, but not otherwise. Simulation studies reveal similar results for time-varying exposures and when the model of interest includes a logistic link.
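The IPTW estimator under comparison fits a marginal structural mean model by weighting each subject by the inverse probability of the observed exposure given confounders. A minimal binary-exposure sketch with stabilized weights, simulated data, and the exposure misclassification studied in the paper left out:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 10000
L = rng.normal(size=n)                              # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L)))           # exposure depends on L
Y = 1.5 * A + L + rng.normal(size=n)                # true marginal effect 1.5

# Stabilized weights: P(A = a) / P(A = a | L), with the denominator from
# a logistic regression of exposure on the confounder.
pL = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(L.reshape(-1, 1))[:, 1]
pA = A.mean()
sw = np.where(A == 1, pA / pL, (1 - pA) / (1 - pL))

# Weighted regression of Y on A alone fits the linear MSM E[Y(a)] = b0 + b1 * a;
# covariate-adjusted OLS is shown for comparison.
iptw = sm.WLS(Y, sm.add_constant(A), weights=sw).fit()
ols = sm.OLS(Y, sm.add_constant(np.column_stack([A, L]))).fit()
print(f"IPTW: {iptw.params[1]:.2f}, covariate-adjusted OLS: {ols.params[1]:.2f}")
```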

