Similar Articles (20 found)
1.
Clinical studies aimed at identifying effective treatments to reduce the risk of disease or death often require long-term follow-up of participants in order to observe a sufficient number of events to precisely estimate the treatment effect. In such studies, observing the outcome of interest during follow-up may be difficult, and high rates of censoring may be observed, which often leads to reduced power when applying straightforward statistical methods developed for time-to-event data. Alternative methods have been proposed to take advantage of auxiliary information that may potentially improve efficiency when estimating marginal survival and improve power when testing for a treatment effect. Recently, Parast et al. (J Am Stat Assoc 109(505):384–394, 2014) proposed a landmark estimation procedure for the estimation of survival and treatment effects in a randomized clinical trial setting and demonstrated that significant gains in efficiency and power can be obtained by incorporating intermediate event information as well as baseline covariates. However, the procedure requires the assumption that the potential outcomes for each individual under treatment and control are independent of treatment group assignment, which is unlikely to hold in an observational study setting. In this paper we develop the landmark estimation procedure for use in an observational setting. In particular, we incorporate inverse probability of treatment weights (IPTW) in the landmark estimation procedure to account for selection bias on observed baseline (pretreatment) covariates. We demonstrate that consistent estimates of survival and treatment effects can be obtained by using IPTW, and that efficiency is improved by using auxiliary intermediate event and baseline information. We compare our proposed estimates to those obtained using the Kaplan–Meier estimator, the original landmark estimation procedure, and the IPTW Kaplan–Meier estimator. We illustrate the resulting reduction in bias and gains in efficiency through a simulation study and apply our procedure to an AIDS dataset to examine the effect of previous antiretroviral therapy on survival.
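As a concrete illustration of the weighting ingredient only (a minimal sketch, not the authors' full landmark procedure; the logistic propensity model and function names are illustrative assumptions), the following Python code forms IPTW weights from baseline covariates and plugs them into a weighted product-limit (IPTW Kaplan–Meier) estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X, Z):
    """IPTW weights from a (hypothetical) logistic propensity model:
    w = Z / e(X) + (1 - Z) / (1 - e(X))."""
    ps = LogisticRegression().fit(X, Z).predict_proba(X)[:, 1]
    return Z / ps + (1 - Z) / (1 - ps)

def weighted_km(time, event, w):
    """Weighted product-limit estimator: at each event time,
    S <- S * (1 - weighted deaths / weighted number at risk)."""
    surv, out = 1.0, []
    for t in np.unique(time[event == 1]):
        at_risk = w[time >= t].sum()
        deaths = w[(time == t) & (event == 1)].sum()
        surv *= 1.0 - deaths / at_risk
        out.append((t, surv))
    return np.array(out)
```

Under correct specification of the propensity model, the reweighting removes confounding by the observed baseline covariates, which is exactly the role IPTW plays inside the landmark procedure.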

2.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment-enhancing interventions. When treatment interventions are delayed until needed, more cost-efficient strategies result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history-dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than a parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime where treatment boosting enters when triggered by an observed event, and the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (the 'conditional' estimator) or the population-averaged ('marginal') estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population-averaged estimator. We conclude that in specific settings, well-chosen SMAR designs may require fewer data for the development of more cost-efficient treatment strategies than parallel designs.
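For the parallel-design benchmark, the analytic calculation reduces to the textbook two-sample formula. The sketch below is an illustrative Python helper, not the paper's SMAR calculation (which relies on simulation): it returns the per-arm sample size for detecting a mean difference `delta` with a two-sided z-test under an assumed common standard deviation `sigma`.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-sample z-test:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (sigma * z / delta) ** 2))

# Example: detect a half-SD adherence improvement with 80% power.
print(n_per_arm(delta=0.5, sigma=1.0))  # 63 per arm
```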

3.

We present a new estimator of the restricted mean survival time in randomized trials where there is right censoring that may depend on treatment and baseline variables. The proposed estimator leverages prognostic baseline variables to obtain equal or better asymptotic precision compared to traditional estimators. Under regularity conditions and random censoring within strata of treatment and baseline variables, the proposed estimator has the following features: (i) it is interpretable under violations of the proportional hazards assumption; (ii) it is consistent and at least as precise as the Kaplan–Meier and inverse probability weighted estimators, under identifiability conditions; (iii) it remains consistent under violations of independent censoring (unlike the Kaplan–Meier estimator) when either the censoring or survival distributions, conditional on covariates, are estimated consistently; and (iv) it achieves the nonparametric efficiency bound when both of these distributions are consistently estimated. We illustrate the performance of our method using simulations based on resampling data from a completed, phase 3 randomized clinical trial of a new surgical treatment for stroke; the proposed estimator achieves a 12% gain in relative efficiency compared to the Kaplan–Meier estimator. The proposed estimator has potential advantages over existing approaches for randomized trials with time-to-event outcomes, since existing methods either rely on model assumptions that are untenable in many applications, or lack some of the efficiency and consistency properties (i)–(iv). We focus on estimation of the restricted mean survival time, but our methods may be adapted to estimate any treatment effect measure defined as a smooth contrast between the survival curves for each study arm. We provide R code to implement the estimator.

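For reference, the Kaplan–Meier plug-in version of the estimand — the benchmark the proposed estimator is compared against — is simply the area under the survival curve up to the truncation time τ. A minimal Python sketch, assuming `times` holds the ordered event times and `survival` the KM values at those times:

```python
import numpy as np

def rmst_from_km(times, survival, tau):
    """Restricted mean survival time: area under the KM step function
    on [0, tau], with S(t) = 1 before the first event time."""
    keep = times <= tau
    t = np.concatenate(([0.0], times[keep], [tau]))
    s = np.concatenate(([1.0], survival[keep]))
    return float(np.sum(s * np.diff(t)))
```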

4.
The odds ratio (OR) has been recommended for measuring relative treatment efficacy in a randomized clinical trial (RCT) because it possesses several desirable statistical properties. In practice, it is not uncommon to encounter an RCT in which some patients do not comply with their assigned treatments and some patients have missing outcomes. Under the compound exclusion restriction, latent ignorability, and monotonicity assumptions, we derive the maximum likelihood estimator (MLE) of the OR and apply Monte Carlo simulation to compare its performance with those of two other commonly used estimators: one assuming outcomes are missing completely at random (MCAR) and one for the intention-to-treat (ITT) analysis based on patients with known outcomes. We note that both the MCAR and ITT estimators may produce a misleading inference about the OR even when the relative treatment effect is equal. We further derive three asymptotic interval estimators for the OR: one using Wald's statistic, one using the logarithmic transformation, and one using an ad hoc procedure combining the first two. On the basis of a Monte Carlo simulation, we evaluate the finite-sample performance of these interval estimators in a variety of situations. Finally, we use data from a randomized encouragement design studying the effect of flu shots on the flu-related hospitalization rate to illustrate the use of the MLE and the asymptotic interval estimators developed here.
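For intuition about the interval estimators, the log-transformation version in the complete-data, full-compliance case is the familiar Woolf-type interval sketched below. This is a simplified illustration only; the paper's MLE and intervals additionally handle noncompliance and missing outcomes.

```python
import numpy as np
from scipy.stats import norm

def or_log_ci(a, b, c, d, alpha=0.05):
    """Wald interval for the odds ratio on the log scale for a 2x2
    table with cell counts a, b (treatment: event / no event) and
    c, d (control): exp(log(ad/bc) +/- z * sqrt(1/a+1/b+1/c+1/d))."""
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(1 - alpha / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)
```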

5.
The Dabrowska (Ann Stat 16:1475–1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.

6.
We develop methods for estimating the causal risk difference and causal risk ratio in randomized trials with noncompliance. The developed estimator is unbiased under the assumption that the biases due to noncompliance are identical between the two treatment arms, where the bias is defined as the difference or ratio between the expectations of the potential outcomes for those who received the test treatment and those who received the control within each randomly assigned group. The instrumental variable estimator yields an unbiased estimate under a sharp null hypothesis but may yield a biased estimate under a non-null hypothesis; in contrast, the bias of the developed estimator does not depend on whether this hypothesis holds. The estimate of the causal effect from the developed estimator may therefore have smaller bias than that from the instrumental variable estimator when a treatment effect exists. There is not yet a standard method for coping with noncompliance, so it is important to evaluate estimates under different assumptions, and the developed estimator can serve this purpose. An application to a field trial for coronary heart disease is provided.
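The instrumental variable comparator mentioned above is, in its simplest form, the Wald ratio of intention-to-treat effects. A minimal sketch (binary outcome `y`, assignment `z`, treatment received `d`); the paper's developed estimator is different and rests on the bias-balance assumption instead:

```python
import numpy as np

def iv_wald(y, z, d):
    """IV (Wald) estimator under noncompliance: the ITT effect on the
    outcome divided by the ITT effect on treatment receipt."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d
```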

7.
In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the least squares (LS) estimator of β, but whenever the data contain an influential observation or the distribution of the error variable deviates from normality, the LS estimator performs poorly, and hence a method based on this estimator (for example, Mallows' Cp-criterion) tends to select a 'wrong' subset. The proposed method overcomes this drawback; its main feature is that it can be used with any estimator of β (either the LS estimator or any robust estimator) without any modification of the criterion. Moreover, the technique is operationally simpler to implement than other existing criteria. The method is illustrated with examples.
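For contrast, the LS-based criterion the article cites is Mallows' Cp; a subset with p fitted coefficients is scored as in the sketch below (the standard textbook formula, not the proposed Sp-criterion, whose definition is given in the article):

```python
def mallows_cp(rss_p, p, sigma2_full, n):
    """Mallows' Cp = RSS_p / sigma2_full - n + 2p, where sigma2_full
    is the residual variance estimated from the full model; subsets
    with Cp close to p are favoured."""
    return rss_p / sigma2_full - n + 2 * p
```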

8.
We propose a modification of local polynomial estimation which improves the efficiency of the conventional method when the observation errors are correlated. The procedure is based on a pre-transformation of the data, generalizing the pre-whitening procedure introduced by Xiao et al. [(2003), 'More Efficient Local Polynomial Estimation in Nonparametric Regression with Autocorrelated Errors', Journal of the American Statistical Association, 98, 980–992]. While these authors assumed a linear process representation for the error process, we avoid any structural assumption. We further allow the regressors and the errors to be dependent. More importantly, we show that including both leading and lagged variables in the approximation of the error terms outperforms the best approximation based on lagged variables only. Establishing its asymptotic distribution, we show that the proposed estimator is more efficient than the standard local polynomial estimator. As a by-product, we prove a suitable version of a central limit theorem which allows us to improve the asymptotic normality result for local polynomial estimators of Masry and Fan [(1997), 'Local Polynomial Estimation of Regression Functions for Mixing Processes', Scandinavian Journal of Statistics, 24, 165–179]. A simulation study confirms the efficiency of our estimator in finite samples. An application to climate data also shows that the new method leads to an estimator with decreased variability.
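For readers unfamiliar with the baseline method, conventional local polynomial (here local linear) estimation solves a kernel-weighted least squares problem at each point; the paper's improvement lies in the pre-transformation applied before this step, which is not reproduced here. A minimal sketch with a Gaussian kernel (all choices illustrative):

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0): weighted LS fit of y on
    (1, x - x0) with kernel weights K((x - x0)/h); the fitted
    intercept is the estimate at x0."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u ** 2)                 # Gaussian kernel
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w                             # X' diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]
```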

9.
We consider the problem of estimating a regression function when a covariate is measured with error. Using the local polynomial estimator of Delaigle et al. [(2009), 'A Design-adaptive Local Polynomial Estimator for the Errors-in-variables Problem', Journal of the American Statistical Association, 104, 348–359] as a benchmark, we propose an alternative way of solving the problem without transforming the kernel function. The asymptotic properties of the alternative estimator are rigorously studied. A detailed implementation algorithm and a computationally efficient bandwidth selection procedure are also provided. The proposed estimator is compared with the existing local polynomial estimator via extensive simulations and an application to the motorcycle crash data. The results show that the new estimator can be less biased than the existing estimator and is numerically more stable.

10.
Cui, Ruifei, Groot, Perry, and Heskes, Tom (2019). Statistics and Computing, 29(2), 311–333.

We consider the problem of causal structure learning from data with missing values, assumed to be drawn from a Gaussian copula model. First, we extend the 'Rank PC' algorithm, designed for Gaussian copula models with purely continuous data (so-called nonparanormal models), to incomplete data by applying rank correlation to pairwise-complete observations and replacing the sample size with an effective sample size in the conditional independence tests to account for the information loss from missing values. When the data are missing completely at random (MCAR), we provide an error bound on the accuracy of 'Rank PC' and show its high-dimensional consistency. However, when the data are missing at random (MAR), 'Rank PC' fails dramatically. Therefore, we propose a Gibbs sampling procedure to draw correlation matrix samples from mixed data that still works correctly under MAR. These samples are translated into an average correlation matrix and an effective sample size, resulting in the 'Copula PC' algorithm for incomplete data. A simulation study shows that (1) 'Copula PC' estimates a more accurate correlation matrix and causal structure than 'Rank PC' under MCAR and, even more so, under MAR, and (2) the use of the effective sample size significantly improves the performance of both 'Rank PC' and 'Copula PC'. We illustrate our methods on two real-world datasets: riboflavin production data and chronic fatigue syndrome data.

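The first ingredient of 'Rank PC' on incomplete data can be sketched compactly: rank correlations computed on pairwise-complete observations, mapped to latent Gaussian correlations via the standard nonparanormal identity r = 2 sin(πρ_s / 6). This sketch covers that ingredient only; the effective sample size and the Gibbs sampler of 'Copula PC' are the paper's contributions.

```python
import numpy as np
import pandas as pd

def latent_corr_pairwise(df):
    """Spearman correlations on pairwise-complete observations
    (pandas drops NaNs pair by pair), translated to correlations of
    the latent Gaussian: r = 2 * sin(pi * rho_s / 6)."""
    rho_s = df.corr(method="spearman")   # pairwise-complete by default
    return 2 * np.sin(np.pi * rho_s / 6)
```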

11.
In single-arm clinical trials with survival outcomes, the Kaplan–Meier estimator and its confidence interval are widely used to assess survival probability and median survival time. Because the asymptotic normality of the Kaplan–Meier estimator is a standard result, sample size calculation methods have not been studied in depth. An existing method is founded on the asymptotic normality of the Kaplan–Meier estimator under the log transformation. However, the log-transformed estimator behaves poorly in the small samples typical of single-arm trials, and the existing method uses an inappropriate standard normal approximation to calculate sample sizes; these issues can seriously affect the accuracy of the results. In this paper, we propose alternative methods that determine sample sizes from a valid standard normal approximation, using several transformations that can give an accurate normal approximation even in small samples. In numerical evaluations via simulation, several of the proposed methods gave more accurate results, and the empirical power of the method based on the arcsine square-root transformation tended to be closest to the prescribed power. These results were supported when the methods were applied to data from three clinical trials.
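To fix ideas, the log-transformation interval discussed above has a simple closed form; a minimal sketch using the Greenwood variance on the log scale (the paper's contribution is the choice of a valid normal approximation and of alternative transformations such as the arcsine square root, not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def km_log_ci(deaths, at_risk, s_t, alpha=0.05):
    """CI for S(t) via the log transformation. deaths/at_risk hold the
    event counts and risk-set sizes at event times up to t; Greenwood
    gives Var(log S) ~ sum d / (n (n - d)), and the interval is
    exp(log S +/- z * sigma)."""
    sigma = np.sqrt(np.sum(deaths / (at_risk * (at_risk - deaths))))
    z = norm.ppf(1 - alpha / 2)
    return np.exp(np.log(s_t) - z * sigma), np.exp(np.log(s_t) + z * sigma)
```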

12.
This paper extends an existing outlier-robust estimator of linear dynamic panel data models with fixed effects, which is based on the median ratio of two consecutive pairs of first-order differenced data. To improve its precision and robustness properties, a general procedure based on higher-order pairwise differences and their ratios is designed. The asymptotic distribution of this class of estimators is derived. Further, the breakdown point properties are obtained under contamination by independent additive outliers and by patches of additive outliers, and are used to select the pairwise differences that do not compromise the robustness properties of the procedure. The proposed estimator is additionally compared with existing methods by means of Monte Carlo simulations.
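A deliberately stylized sketch of the basic construction the paper starts from: first differencing removes the fixed effects, and the median of ratios of consecutive differences gives a robust location measure. This is only the crude ingredient, under the assumption of an AR(1) panel; the paper's estimator refines it with higher-order pairwise differences chosen for their breakdown properties.

```python
import numpy as np

def median_ratio_ar1(y):
    """Stylized median-of-ratios estimate of the AR(1) coefficient in
    a fixed-effects panel, y of shape (N, T): first differences drop
    the fixed effects, and the median of dy_t / dy_{t-1} over all
    units and periods is insensitive to isolated outliers."""
    dy = np.diff(y, axis=1)              # (N, T-1) first differences
    ratios = dy[:, 1:] / dy[:, :-1]      # consecutive-pair ratios
    return float(np.median(ratios))
```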

13.
In this paper we are interested in the derivation of the asymptotic and finite-sample distributional properties of a 'quasi-maximum likelihood' estimator of a 'scale' second-order parameter β, based directly on the log-excesses of an available sample. Such estimation is of fundamental importance for the adaptive selection of the optimal sample fraction to be used in classical semi-parametric tail index estimation, as well as for reduced-bias estimation of the tail index, high quantiles, and other parameters of extreme or even rare events. An application in the area of survival analysis is provided, based on a dataset of males diagnosed with cancer of the tongue.
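For context on why β matters: the classical semi-parametric tail index estimate depends on how many top order statistics k are used, and β drives the optimal choice of that sample fraction. The standard Hill estimator (a textbook formula, not the paper's estimator of β):

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the tail index from the k largest order
    statistics: mean of log X_{(n-i+1)} - log X_{(n-k)}, i = 1..k."""
    xs = np.sort(x)
    return float(np.mean(np.log(xs[-k:]) - np.log(xs[-k - 1])))
```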

14.
A two-sample test statistic for detecting shifts in location is developed for a broad range of underlying distributions using adaptive techniques. The test statistic is a linear rank statistic based on a simple modification of the Wilcoxon test: the scores are Winsorized ranks, where the upper and lower Winsorizing proportions are estimated in the first stage of the adaptive procedure using sample measures of the distribution's skewness and tailweight. An empirical relationship between the Winsorizing proportions and the sample skewness and tailweight allows a 'continuous' adaptation of the test statistic to the data. The test has good asymptotic properties, and its small-sample performance is compared with other popular parametric, nonparametric, and two-stage tests using Monte Carlo methods. Based on these results, the proposed test procedure is recommended for moderate and larger sample sizes.
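A sketch of the statistic for fixed Winsorizing proportions (the adaptive first stage that estimates the proportions from skewness and tailweight is the paper's contribution and is omitted; the standardization uses the usual permutation moments of a linear rank statistic):

```python
import numpy as np
from scipy.stats import rankdata

def winsorized_rank_stat(x, y, p_low, p_high):
    """Two-sample linear rank statistic with Winsorized-rank scores.
    Combined midranks are clipped at the p_low and (1 - p_high)
    positions; the sum of sample-x scores is standardized using
    E[S] = n1 * abar, Var[S] = n1 n2 / (N(N-1)) * sum (a - abar)^2."""
    scores = rankdata(np.concatenate([x, y]))       # midranks for ties
    n1, N = len(x), len(scores)
    scores = np.clip(scores, np.ceil(p_low * N),
                     np.floor((1 - p_high) * N))    # Winsorize ranks
    s, abar = scores[:n1].sum(), scores.mean()
    var = n1 * (N - n1) / (N * (N - 1)) * np.sum((scores - abar) ** 2)
    return (s - n1 * abar) / np.sqrt(var)
```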

15.
A robust rank-based estimator for variable selection in linear models with grouped predictors is studied. The proposed estimation procedure extends existing rank-based variable selection [Johnson, B.A., and Peng, L. (2008), 'Rank-based Variable Selection', Journal of Nonparametric Statistics, 20(3), 241–252] and the weighted Wilcoxon SCAD [Wang, L., and Li, R. (2009), 'Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method', Biometrics, 65(2), 564–571] to linear regression models with grouped variables. The resulting estimator is robust to contamination or deviations in both the response and the design space. The oracle property and asymptotic normality of the estimator are established under some regularity conditions. Simulation studies reveal that the proposed method performs better than the existing rank-based methods of Johnson and Peng (2008) and Wang and Li (2009) for models with grouped variables. The procedure also outperforms the adaptive hierarchical lasso [Zhou, N., and Zhu, J. (2010), 'Group Variable Selection Via a Hierarchical Lasso and its Oracle Property', Statistics and Its Interface, 3(4), 557–574] in the presence of local contamination in the design space or for heavy-tailed error distributions.

16.
In survival studies, current status data are frequently encountered when some individuals in a study are not observed continuously. This paper considers the problem of simultaneous variable selection and parameter estimation in the high-dimensional continuous generalized linear model with current status data. We apply the penalized likelihood procedure with the smoothly clipped absolute deviation (SCAD) penalty to select significant variables and estimate the corresponding regression coefficients. With a proper choice of tuning parameters, the resulting estimator is shown to be √(n/p_n)-consistent under some mild conditions. In addition, we show that the resulting estimator has the same asymptotic distribution as the estimator obtained when the true model is known. The finite-sample behavior of the proposed estimator is evaluated through simulation studies and a real example.
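The penalty itself has a closed form — the Fan–Li quadratic spline with the conventional a = 3.7. A small sketch, independent of the current status likelihood it is attached to in the paper:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty: lam*|t| near zero, a quadratic transition on
    (lam, a*lam], and a constant (a+1)*lam^2/2 beyond a*lam, so
    large coefficients are not shrunk."""
    t = np.abs(theta)
    mid = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t,
                    np.where(t <= a * lam, mid, (a + 1) * lam ** 2 / 2))
```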

17.
Rao (J. Indian Statist. Assoc. 17 (1979) 125) gave a 'necessary form' for an unbiased mean square error (MSE) estimator to be 'uniformly non-negative', where the MSE is that of a homogeneous linear estimator, 'subject to a specified constraint', of a survey population total of a real variable of interest. We present a corresponding theorem when the 'constraint' is relaxed. Further results are added presenting formulae for estimators of MSEs when the variate-values for the sampled individuals are not ascertainable. Though not ascertainable, they are supposed to be suitably estimated either by (1) randomized response techniques covering sensitive issues or by (2) further sampling in 'subsequent' stages in specific ways when the initial sampling units are composed of a number of sub-units. Using live numerical data, practical uses of the proposed alternative MSE estimators are demonstrated.

18.
Changes in survival rates during 1940–1992 for patients with Hodgkin's disease are studied by using population-based data. The aim of the analysis is to identify when the breakthrough in clinical trials of chemotherapy treatments started to increase population survival rates, and to find how long it took for the increase to level off, indicating that the full population effect of the breakthrough had been realized. A Weibull relative survival model is used because the model parameters are easily interpretable when assessing the effect of advances in clinical trials. However, the methods apply to any relative survival model that falls within the generalized linear models framework. The model is fitted by using modifications of existing software (SAS, GLIM) and profile likelihood methods. The results are similar to those from a cause-specific analysis of the data by Feuer and co-workers. Survival started to improve around the time that a major chemotherapy breakthrough (nitrogen mustard, Oncovin, prednisone and procarbazine) was publicized in the mid 1960s but did not level off for 11 years. For the analysis of data where the cause of death is obtained from death certificates, the relative survival approach has the advantage of providing the necessary adjustment for expected mortality from causes other than the disease without requiring information on the causes of death.
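The model's building block is the relative-survival factorization: observed survival is the expected (life-table) survival times a disease-specific relative survival curve, here Weibull. A minimal sketch of that decomposition only (the paper fits it within the generalized linear models framework, with parameters varying by calendar period):

```python
import numpy as np

def observed_survival(t, s_expected, shape, scale):
    """Relative survival model: S_obs(t) = S_exp(t) * r(t), with the
    relative survival r(t) taken to be Weibull,
    r(t) = exp(-(t / scale)**shape)."""
    return s_expected * np.exp(-(t / scale) ** shape)
```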

19.
In this paper we present an indirect estimation procedure for fractional (ARFIMA) time series models. The estimation method is based on an 'incorrect' criterion which does not directly provide a consistent estimator of the parameters of interest, but leads to correct inference by using simulations.

The main steps are as follows. First, we consider an auxiliary model which can be easily estimated; specifically, we choose the finite-lag autoregressive model. This model is then estimated both on the observations and on simulated values drawn from the ARFIMA model associated with a given value of the parameters of interest. Finally, the parameters of interest are calibrated so that the two estimates of the auxiliary parameters are close.

In this article, we describe the estimation procedure and compare the performance of the indirect estimator with some alternative estimators based on the likelihood function in a Monte Carlo study.
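A compact end-to-end sketch of the procedure for the pure fractional case ARFIMA(0, d, 0) with an AR(p) auxiliary model; every name and tuning choice here (lag order, number of simulated paths, truncated MA(∞) simulation) is an illustrative assumption, not the article's exact implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_arfima0d0(d, n, rng):
    """Truncated MA(inf) simulation of (1 - L)^(-d) eps:
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(2 * n)
    return np.convolve(eps, psi)[n:2 * n]   # discard burn-in

def ar_fit(x, p):
    """Auxiliary AR(p) coefficients by least squares."""
    X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def indirect_d(x, p=5, n_sim=10, seed=0):
    """Calibrate d so that AR(p) fits on simulated ARFIMA paths match
    the AR(p) fit on the data (common random numbers across d)."""
    beta_data = ar_fit(x, p)
    seeds = np.random.default_rng(seed).integers(0, 2**31, size=n_sim)

    def loss(d):
        betas = [ar_fit(simulate_arfima0d0(d, len(x),
                        np.random.default_rng(s)), p) for s in seeds]
        return float(np.sum((np.mean(betas, axis=0) - beta_data) ** 2))

    return minimize_scalar(loss, bounds=(-0.49, 0.49),
                           method="bounded").x
```

Reusing the same simulation seeds at every candidate value of d keeps the loss deterministic, which is the usual common-random-numbers device in indirect inference.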

20.
Mehrotra (1997) presented an 'improved' Brown and Forsythe (1974) statistic designed to provide a valid test of mean equality in independent-groups designs when variances are heterogeneous. In particular, the usual Brown and Forsythe procedure was modified by using a Satterthwaite approximation for the numerator degrees of freedom instead of the usual value of the number of groups minus one. Mehrotra then demonstrated, through Monte Carlo methods, that the 'improved' method results in a robust test of significance in cases where the usual Brown and Forsythe method does not; accordingly, this 'improved' procedure was recommended. We show that under conditions likely to be encountered in applied settings, that is, conditions involving heterogeneous variances as well as nonnormal data, the 'improved' Brown and Forsythe procedure results in depressed or inflated rates of Type I error in unbalanced designs. Previous findings indicate, however, that one can obtain a robust test by adopting a heteroscedastic statistic with robust estimators, rather than the usual least squares estimators, and further improvement can be expected when critical significance values are obtained through bootstrapping methods.
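For reference, the classic Brown–Forsythe statistic and its Satterthwaite denominator degrees of freedom are sketched below; Mehrotra's modification replaces the numerator df k − 1 with a further Satterthwaite approximation, whose exact formula is in the 1997 paper and is not reproduced here.

```python
import numpy as np

def brown_forsythe(groups):
    """Classic Brown-Forsythe (1974) test of mean equality under
    heterogeneous variances: F* = sum n_j (m_j - m)^2 / sum c_j with
    c_j = (1 - n_j/N) s_j^2; denominator df by Satterthwaite."""
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])
    N = n.sum()
    grand = np.sum(n * m) / N
    c = (1 - n / N) * s2
    f_star = np.sum(n * (m - grand) ** 2) / c.sum()
    df_den = c.sum() ** 2 / np.sum(c ** 2 / (n - 1))
    return f_star, df_den   # numerator df is k - 1 in the classic test
```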

