Similar Articles
20 similar articles found (search time: 922 ms)
1.
In randomized trials, investigators are frequently interested in estimating the direct effect of a treatment on an outcome that is not mediated by intermediate variables, in addition to the usual intention-to-treat (ITT) effect. Even though randomization ensures that the ITT effect is unconfounded, the direct effect is not identified when unmeasured variables affect the intermediate and outcome variables. Although the unmeasured variables cannot be adjusted for in the models, it is still important to evaluate their potential bias quantitatively. This article proposes a sensitivity analysis method for controlled direct effects using a marginal structural model, extending the sensitivity analysis method for unmeasured confounding introduced in the context of observational studies. The proposed method is illustrated using a randomized trial of depression.
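A minimal sketch of the bias-correction idea behind such a sensitivity analysis, assuming a single binary unmeasured confounder and the simple additive bias formula; the parameter names, the formula, and the numbers are generic illustrations, not the paper's MSM-based procedure.

```python
import numpy as np

def corrected_direct_effect(naive_effect, gamma, p1, p0):
    """Bias-corrected effect under one binary unmeasured confounder U:
    gamma is the assumed effect of U on the outcome; p1 and p0 are the
    assumed prevalences of U under treatment and control. The additive
    bias formula bias = gamma * (p1 - p0) is a generic illustration."""
    return naive_effect - gamma * (p1 - p0)

# Sweep the sensitivity parameter over a grid and watch how the
# estimate moves (the naive estimate of 1.5 is hypothetical).
for gamma in np.linspace(0.0, 2.0, 5):
    print(f"gamma={gamma:.1f}: "
          f"{corrected_direct_effect(1.5, gamma, p1=0.5, p0=0.3):.2f}")
```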

2.
Intention‐to‐treat (ITT) analysis is widely used to establish efficacy in randomized clinical trials. However, in a long‐term outcomes study where non‐adherence to study drug is substantial, the on‐treatment effect of the study drug may be underestimated using the ITT analysis. The analyses presented herein are from the EVOLVE trial, a double‐blind, placebo‐controlled, event‐driven cardiovascular outcomes study conducted to assess whether a treatment regimen including cinacalcet compared with placebo in addition to other conventional therapies reduces the risk of mortality and major cardiovascular events in patients receiving hemodialysis with secondary hyperparathyroidism. Pre‐specified sensitivity analyses were performed to assess the impact of non‐adherence on the estimated effect of cinacalcet. These analyses included lag‐censoring, inverse probability of censoring weights (IPCW), rank preserving structural failure time model (RPSFTM) and iterative parameter estimation (IPE). The relative hazard (cinacalcet versus placebo) of mortality and major cardiovascular events was 0.93 (95% confidence interval 0.85, 1.02) using the ITT analysis; 0.85 (0.76, 0.95) using lag‐censoring analysis; 0.81 (0.70, 0.92) using IPCW; 0.85 (0.66, 1.04) using RPSFTM and 0.85 (0.75, 0.96) using IPE. These analyses, while not providing definitive evidence, suggest that the intervention may have an effect while subjects are receiving treatment. The ITT method remains the established method to evaluate efficacy of a new treatment; however, additional analyses should be considered to assess the on‐treatment effect when substantial non‐adherence to study drug is expected or observed. Copyright © 2015 John Wiley & Sons, Ltd.
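Of the four sensitivity analyses listed, lag-censoring is the simplest to illustrate: follow-up is censored a fixed lag after study-drug discontinuation and the Cox model is refit. A hedged sketch using the lifelines package; the column names and simulated data are hypothetical, and this is not the EVOLVE analysis code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def lag_censor(df, lag=0.5):
    """Censor each subject `lag` years after study-drug discontinuation.
    Expects columns: time (follow-up years), event (0/1), disc_time
    (discontinuation time, NaN if the subject never discontinued)."""
    out = df.copy()
    cut = out["disc_time"] + lag
    late = out["disc_time"].notna() & (out["time"] > cut)
    out.loc[late, "event"] = 0
    out.loc[late, "time"] = cut[late]
    return out

# Hypothetical trial data: ~30% of subjects discontinue at random times.
rng = np.random.default_rng(0)
n = 500
trial = pd.DataFrame({
    "arm": rng.integers(0, 2, n),
    "time": rng.exponential(3.0, n),
    "disc_time": np.where(rng.random(n) < 0.3, rng.exponential(2.0, n), np.nan),
    "event": (rng.random(n) < 0.6).astype(int),
})

cph = CoxPHFitter()
cph.fit(lag_censor(trial)[["arm", "time", "event"]],
        duration_col="time", event_col="event")
print(cph.summary["exp(coef)"])  # hazard ratio for `arm`
```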

3.
Intent‐to‐treat (ITT) analysis is viewed as the analysis of a clinical trial that provides the least bias, but difficult issues can arise. Common analysis methods such as mixed‐effects and proportional hazards models are usually labeled as ITT analysis, but in practice they can often be inconsistent with a strict interpretation of the ITT principle. In trials where effective medications are available to patients withdrawing from treatment, ITT analysis can mask important therapeutic effects of the intervention studied in the trial. Analysis of on‐treatment data may be subject to bias, but can address efficacy objectives when combined with careful review of the pattern of withdrawals across treatments particularly for those patients withdrawing due to lack of efficacy and adverse events. Copyright © 2010 John Wiley & Sons, Ltd.

4.
The odds ratio (OR) has been recommended elsewhere to measure the relative treatment efficacy in a randomized clinical trial (RCT), because it possesses a few desirable statistical properties. In practice, it is not uncommon to come across an RCT in which there are patients who do not comply with their assigned treatments and patients whose outcomes are missing. Under the compound exclusion restriction, latent ignorable and monotonicity assumptions, we derive the maximum likelihood estimator (MLE) of the OR and apply Monte Carlo simulation to compare its performance with those of the other two commonly used estimators for missing completely at random (MCAR) and for the intention-to-treat (ITT) analysis based on patients with known outcomes, respectively. We note that both estimators for MCAR and the ITT analysis may produce a misleading inference of the OR even when the relative treatment effect is equal. We further derive three asymptotic interval estimators for the OR, including the interval estimator using Wald’s statistic, the interval estimator using the logarithmic transformation, and the interval estimator using an ad hoc procedure of combining the above two interval estimators. On the basis of a Monte Carlo simulation, we evaluate the finite-sample performance of these interval estimators in a variety of situations. Finally, we use the data taken from a randomized encouragement design studying the effect of flu shots on the flu-related hospitalization rate to illustrate the use of the MLE and the asymptotic interval estimators for the OR developed here.
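The logarithmic-transformation interval mentioned here is a textbook construction and easy to sketch; the counts below are hypothetical, and the paper's MLE under noncompliance and missing outcomes is considerably more involved.

```python
import math

def or_ci_log(a, b, c, d, z=1.96):
    """Odds ratio and Wald CI on the log scale from 2x2 counts:
    a, b = events / non-events under treatment;
    c, d = events / non-events under control.
    Uses SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_hat = (a * d) / (b * c)
    half = z * math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_hat, or_hat * math.exp(-half), or_hat * math.exp(half)

print(or_ci_log(20, 80, 35, 65))  # hypothetical counts
```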

5.
A clinical hold order by the Food and Drug Administration (FDA) to the sponsor of a clinical trial is a measure to delay a proposed or suspend an ongoing clinical investigation. The phase III clinical trial START serves as a motivating data example to explore implications and potential statistical approaches for a trial continuing after a clinical hold is lifted. Despite a modified intention‐to‐treat (ITT) analysis introduced to account for the hold by excluding the patients most likely to have been affected by it, the trial did not show a significant improvement in overall survival duration, and the question remains whether the negative result was an effect of the clinical hold. In this paper, we propose a multistate model incorporating the clinical hold as well as disease progression as intermediate events to investigate the impact of the clinical hold on the treatment effect. Moreover, we consider a simple counterfactual censoring approach as an alternative strategy to the modified ITT analysis for dealing with a clinical hold. Using a realistic simulation study informed by the START data and with a design based on our multistate model, we show that the modified ITT analysis used in the START trial was reasonable. However, the censoring approach is shown to have some benefits in terms of power and flexibility.
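The censoring idea can be sketched directly: every subject whose follow-up extends past the start of the hold is censored at that calendar time. A simplified sketch with hypothetical column names; the paper's multistate model with progression as an intermediate state is not reproduced here.

```python
import pandas as pd

def censor_at_hold(df, hold_start):
    """Censor subjects whose observed follow-up extends past the start
    of the clinical hold. Expects columns: entry (randomization date)
    plus time/event on the study scale, all on one calendar scale in
    years. Assumes every entry date precedes the hold."""
    out = df.copy()
    at_risk = out["entry"] + out["time"] > hold_start
    out.loc[at_risk, "event"] = 0
    out.loc[at_risk, "time"] = hold_start - out.loc[at_risk, "entry"]
    return out
```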

6.
Biostatisticians recognize the importance of precise definitions of technical terms in randomized controlled clinical trial (RCCT) protocols, statistical analysis plans, and so on, in part because definitions are a foundation for subsequent actions. Imprecise definitions can be a source of controversies about appropriate statistical methods, interpretation of results, and extrapolations to larger populations. This paper presents precise definitions of some familiar terms and definitions of some new terms, some perhaps controversial. The glossary contains definitions that can be copied into a protocol, statistical analysis plan, or similar document and customized. The definitions were motivated and illustrated in the context of a longitudinal RCCT in which some randomized enrollees are non‐adherent, receive a corrupted treatment, or withdraw prematurely. The definitions can be adapted for use in a much wider set of RCCTs. New terms can be used in place of controversial terms, for example, ‘subject’. We define terms that specify a person's progress through an RCCT's phases and that precisely define those phases and milestones. We define terms that distinguish between subsets of an RCCT's enrollees and a much larger patient population. ‘The intention‐to‐treat (ITT) principle’ has multiple interpretations that can be distilled into the definition of the ‘ITT analysis set of randomized enrollees’. Most differences among interpretations of ‘the’ ITT principle stem from an RCCT's primary objective (mainly efficacy versus effectiveness). Four different ‘authoritative’ definitions of the ITT analysis set of randomized enrollees illustrate the variety of interpretations. We propose a separate specification of the analysis set of data that will be used in a specific analysis. Copyright © 2016 John Wiley & Sons, Ltd.

7.
Increasing-block prices are common in markets for water, cellular phone service, and retail electricity. This study estimates demand models under block prices and conducts a Monte Carlo experiment to test the small-sample bias of structural and instrumental variables (IV) estimators. We estimate the price and income elasticity of water demand under increasing-block prices using a structural discrete/continuous choice (DCC) model, as well as random effects and IV. Elasticity estimates are sensitive to the modeling framework. The Monte Carlo experiment suggests that IV and DCC models estimate both price and income elasticity with bias, with no clear best choice among estimators.
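The IV benchmark in such a comparison can be illustrated with a bare-bones two-stage least squares routine: under block pricing the marginal price is endogenous because it depends on the chosen quantity, and components of the price schedule are the usual candidate instruments. A generic sketch, not the authors' DCC estimator.

```python
import numpy as np

def tsls(y, X_endog, Z):
    """Two-stage least squares with an intercept: project the
    endogenous regressor(s) onto the instruments, then regress y on
    the fitted values. Returns [intercept, slopes...]."""
    n = len(y)
    ones = np.ones((n, 1))
    Zfull = np.column_stack([ones, Z])
    # First stage: fitted values of the endogenous regressors.
    X_hat = Zfull @ np.linalg.lstsq(Zfull, X_endog, rcond=None)[0]
    # Second stage: OLS of y on the fitted values.
    Xfull = np.column_stack([ones, X_hat])
    return np.linalg.lstsq(Xfull, y, rcond=None)[0]
```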

8.
This paper proposes a combination of a particle-filter-based method and the expectation-maximization algorithm (PFEM) to filter unobservable variables and hence reduce omitted-variable bias. Furthermore, I treat as unobservable an exogenous variable that could serve as an instrument in the instrumental variable (IV) methodology. The aim is to show that the PFEM can eliminate or reduce both the omitted-variable bias and the simultaneous-equation bias by filtering the omitted variable and the unobserved instrument, respectively. In other words, the procedure provides (at least approximately) consistent estimates without using additional information embedded in the omitted variable or in the instruments, since both are filtered from the observable variables. The validity of the procedure is shown both through simulations and through a comparison with an IV analysis from an important previous publication. On the latter point, I demonstrate that the procedure developed in this article yields results similar to those of the original IV analysis.
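A minimal bootstrap particle filter for a scalar linear-Gaussian state illustrates the filtering half of PFEM; the paper wraps such a filter inside an EM loop that re-estimates the model parameters (a, q, r below) at each iteration. The model and parameter names are illustrative, not the paper's specification.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, a, q, r, rng=None):
    """Bootstrap particle filter for x_t = a*x_{t-1} + N(0, q),
    y_t = x_t + N(0, r). Returns the filtered state means, i.e. the
    estimate of the unobserved (omitted) variable at each time."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for yt in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (yt - x) ** 2 / r                       # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                           # filtered mean
        x = x[rng.choice(n_particles, n_particles, p=w)]      # resample
    return np.array(means)
```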

9.
In the recovery of interblock information to improve estimates of treatment differences in incomplete block designs, the parameter ρ is usually unknown. Many authors have worked on the problem of estimating it and of studying its properties together with the properties of the treatment-difference estimates. In this paper a numerically efficient algorithm is developed which yields the maximum likelihood estimates (MLEs) of all the parameters in the mixed incomplete block design model (treatment effects, ρ, and variance).

10.
The symmetric treatment of an asymmetric approach to factor analysis is shown to provide accurate communality estimates. In comparison with existing estimates including upper and lower bounds, the present approach is shown to be superior. In one example with known communalities, the present approach perfectly captures those communalities. In two empirical examples, it is shown to produce better communality estimates than any existing method.

11.
In two observational studies, one investigating the effects of minimum wage laws on employment and the other the effects of exposures to lead, an estimated treatment effect's sensitivity to hidden bias is examined. The estimate uses the combined quantile averages that were introduced in 1981 by B. M. Brown as simple, efficient, robust estimates of location admitting both exact and approximate confidence intervals and significance tests. Closely related to Gastwirth's estimate and Tukey's trimean, the combined quantile average has asymptotic efficiency for normal data that is comparable with that of a 15% trimmed mean, and higher efficiency than the trimean, but it has resistance to extreme observations or breakdown comparable with that of the trimean and better than the 15% trimmed mean. Combined quantile averages provide consistent estimates of an additive treatment effect in a matched randomized experiment. Sensitivity analyses are discussed for combined quantile averages when used in a matched observational study in which treatments are not randomly assigned. In a sensitivity analysis in an observational study, subjects are assumed to differ with respect to an unobserved covariate that was not adequately controlled by the matching, so that treatments are assigned within pairs with probabilities that are unequal and unknown. The sensitivity analysis proposed here uses significance levels, point estimates and confidence intervals based on combined quantile averages and examines how these inferences change under a range of assumptions about biases due to an unobserved covariate. The procedures are applied in the studies of minimum wage laws and exposures to lead. The first example is also used to illustrate sensitivity analysis with an instrumental variable.
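The two named relatives of Brown's combined quantile averages are one-liners and convey the flavor of the family: symmetric quantiles combined with fixed weights. A sketch of the standard definitions; Brown's exact weighting scheme is not reproduced here.

```python
import numpy as np

def trimean(x):
    """Tukey's trimean: (Q1 + 2*median + Q3) / 4."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * med + q3) / 4

def gastwirth(x):
    """Gastwirth's estimator: 0.3*Q(1/3) + 0.4*median + 0.3*Q(2/3)."""
    q13, med, q23 = np.quantile(x, [1 / 3, 0.5, 2 / 3])
    return 0.3 * q13 + 0.4 * med + 0.3 * q23
```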

12.
In a randomized controlled trial (RCT), it is possible to improve precision and power and reduce sample size by appropriately adjusting for baseline covariates. There are multiple statistical methods for adjusting for prognostic baseline covariates, such as ANCOVA. In this paper, we propose a clustering-based stratification method for adjusting for prognostic baseline covariates. Clusters (strata) are formed solely from prognostic baseline covariates, using neither outcome data nor treatment assignment; therefore, the clustering procedure can be completed before outcome data are available. The treatment effect is estimated within each cluster, and the overall treatment effect is derived by combining all cluster-specific estimates. The implementation of the proposed procedure is described. Simulation studies and an example are presented.
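A minimal sketch of the clustering-based stratification idea, assuming difference-in-means effects within clusters and cluster-size combining weights; the cluster count and weights are illustrative choices, not the paper's recommendations.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_effect(X_baseline, treat, y, n_clusters=4, seed=0):
    """Cluster on prognostic baseline covariates only (no outcome data,
    no treatment labels), estimate a difference-in-means treatment
    effect within each cluster, then combine with cluster-size weights.
    Assumes every cluster contains subjects from both arms."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(X_baseline)
    effects, weights = [], []
    for k in range(n_clusters):
        m = labels == k
        effects.append(y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean())
        weights.append(m.mean())
    return np.average(effects, weights=weights)
```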

13.
When modeling multilevel data, it is important to accurately represent the interdependence of observations within clusters. Ignoring data clustering may result in parameter misestimation. However, it is not well established to what degree parameter estimates are affected by model misspecification when applying missing data techniques (MDTs) to incomplete multilevel data. We compare the performance of three MDTs with incomplete hierarchical data. We consider the impact of imputation model misspecification on the quality of parameter estimates by employing multiple imputation under assumptions of a normal model (MI/NM) with two-level cross-sectional data when values are missing at random on the dependent variable at rates of 10%, 30%, and 50%. Five criteria are used to compare estimates from MI/NM to estimates from MI assuming a linear mixed model (MI/LMM) and maximum likelihood estimation to the same incomplete data sets. With 10% missing data (MD), techniques performed similarly for fixed-effects estimates, but variance components were biased with MI/NM. Effects of model misspecification worsened at higher rates of MD, with the hierarchical structure of the data markedly underrepresented by biased variance component estimates. MI/LMM and maximum likelihood provided generally accurate and unbiased parameter estimates but performance was negatively affected by increased rates of MD.
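The mismatch between a single-level (normal-model) imputation and a clustered analysis model can be sketched in a few lines. For brevity this uses one sklearn imputation pass rather than full multiple imputation, and all names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical two-level data: 30 clusters of 20, random intercepts u,
# with the outcome y missing at random at a ~30% rate.
rng = np.random.default_rng(1)
g = np.repeat(np.arange(30), 20)
u = rng.normal(0.0, 1.0, 30)[g]
df = pd.DataFrame({"cluster": g, "x": rng.normal(size=600)})
df["y"] = 1.0 + 0.5 * df["x"] + u + rng.normal(size=600)
df.loc[rng.random(600) < 0.3, "y"] = np.nan

# Single-level normal-model imputation that ignores clustering,
# analogous to MI/NM above; a multilevel imputer would also condition
# on cluster membership.
df[["x", "y"]] = IterativeImputer(random_state=1).fit_transform(df[["x", "y"]])

# Analysis model: linear mixed model with a random intercept per cluster.
fit = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()
print(fit.params)   # fixed effects
print(fit.cov_re)   # variance component; biased downward under MI/NM
```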

14.
We wish to model pulse wave velocity (PWV) as a function of longitudinal measurements of pulse pressure (PP) taken at the visit where PWV is measured and at prior visits. A number of approaches are compared. First, we use the PP at the same visit as the PWV in a linear regression model. Second, we use the average of all available PPs as the explanatory variable in a linear regression model. Next, a two-stage process is applied: the longitudinal PP is modeled using a linear mixed-effects model, and this modeled PP is used in the regression model to describe PWV. Another approach to using the longitudinal PP data is to obtain a measure of the cumulative burden, the area under the PP curve, which is then used as an explanatory variable to model PWV. Finally, a joint Bayesian model is constructed similar to the two-stage model.
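The cumulative-burden covariate is just the trapezoidal area under each subject's PP trajectory, which can be sketched in one call; the visit times and values below are hypothetical.

```python
import numpy as np

def pp_auc(times, pp):
    """Cumulative pulse-pressure burden: area under the PP curve across
    visits via the trapezoidal rule. The AUC then enters a linear
    regression for PWV as one of the compared approaches."""
    return np.trapz(pp, times)

# Hypothetical subject: PP (mmHg) measured at four visits (years).
print(pp_auc([0, 2, 4, 6], [40, 44, 47, 52]))
```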

15.
We consider designs for which the treatment association matrices for the row design and for the column design commute. For these designs it is shown that the usual procedures of combined estimation yield unbiased estimates of treatment differences. For an important special class of designs a procedure of combined estimation is proposed which assures improvement over the estimates obtained from the interaction analysis.

16.
Experiments in various countries with “last week” and “last month” reference periods for reporting of households’ food consumption have generally found that “week”-based estimates are higher. In India the National Sample Survey (NSS) has consistently found that “week”-based estimates are higher than month-based estimates for a majority of food item groups. But why are week-based estimates higher than month-based estimates? It has long been believed that the reason must be recall lapse, inherent in a long reporting period such as a month. But is household consumption of a habitually consumed item “recalled” in the same way as that of an item of infrequent consumption? And why doesn’t memory lapse cause over-reporting (over-assessment) as often as under-reporting? In this paper, we provide an alternative hypothesis, involving a “quantity floor effect” in reporting behavior, under which “week” may cause over-reporting for many items. We design a test to detect the effect postulated by this hypothesis and carry it out on NSS 68th round HCES data. The test results strongly suggest that our hypothesis provides a better explanation of the difference between week-based and month-based estimates than the recall lapse theory.

17.
18.
Brown and Cohen (1974) considered the problem of interval estimation of the common mean of two normal populations based on independent random samples. They showed that if we take the usual confidence interval using the first sample only and centre it around an appropriate combined estimate of the common mean, the resulting interval would contain the true value with higher probability. They also gave a sufficient condition which such a point estimate should satisfy. Bhattacharya and Shah (1978) showed that the estimates satisfying this condition are nearly identical to the mean of the first sample. In this paper we obtain a stronger sufficient condition which is satisfied by many point estimates when the size of the second sample exceeds ten.
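One standard combined estimate of a common mean is the Graybill–Deal estimator, which weights each sample mean by n_i / s_i²; whether re-centring on it satisfies the sufficient condition depends on the sample sizes, as the abstract indicates. A sketch of the textbook estimator, not of the paper's condition.

```python
import numpy as np

def graybill_deal(x1, x2):
    """Graybill-Deal combined estimate of the common mean of two normal
    samples: weight each sample mean by n_i / s_i^2 (inverse of the
    estimated variance of the sample mean)."""
    w1 = len(x1) / np.var(x1, ddof=1)
    w2 = len(x2) / np.var(x2, ddof=1)
    return (w1 * np.mean(x1) + w2 * np.mean(x2)) / (w1 + w2)
```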

19.
Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or to improve the efficiency with which they are collected. For reasons of cost it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably, without conducting an experiment that is embedded in the survey which involves enumerating some respondents by using the new method and some under the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also mentioned. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the approach proposed include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; it allows a wide set of alternative experimental designs. This paper applies the methodology proposed to the Australian Labour Force Survey to measure the effect of replacing pen-and-paper interviewing with computer-assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The approach proposed is significantly more efficient than using only 1 month of sample data in estimation.

20.
There has been much work on the use of neighbouring plots to control environmental variation in the analysis of agricultural field experiments. In particular, the Residual Maximum Likelihood Neighbour (REMLN) analysis of Gleeson & Cullis (1987) appears very promising. The application of the REMLN analysis to an unequally replicated field trial, augmented with an additional variety planted at every sixth plot in a grid system, is here compared with a covariance (COV) analysis using the neighbouring grid or check-plot values as the covariate. The results indicate that the REMLN analysis gives more accurate estimates of treatment contrasts than the COV analysis, but that the estimates of treatment means can be biased. The bias depends on the mean of the check plots and can be removed by adjusting the estimates of the treatment means so that their average equals the average of the raw means rather than that of the raw data.
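The COV side of the comparison is plain analysis of covariance with the nearest check-plot value as the covariate. A sketch with hypothetical data and column names; the REMLN analysis of Gleeson & Cullis (1987) requires specialized mixed-model software and is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical augmented trial: each plot's yield is adjusted for the
# yield of its nearest check plot, used as a covariate.
rng = np.random.default_rng(2)
n = 60
plots = pd.DataFrame({
    "variety": rng.integers(0, 12, n).astype(str),
    "check": rng.normal(10.0, 1.0, n),   # nearest check-plot yield
})
plots["yield_"] = 5.0 + 0.8 * plots["check"] + rng.normal(0.0, 1.0, n)

fit = smf.ols("yield_ ~ C(variety) + check", data=plots).fit()
print(fit.params["check"])               # covariate-adjustment coefficient
```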
